Updates from: 06/24/2022 01:11:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Declare your claims in the [claims schema](claimsschema.md). Open the extensions
</BuildingBlocks> --> ```
-A claims transformation technical profile initiates the **isForgotPassword** claim. The technical profile is referenced later. When invoked, it sets the value of the **isForgotPassword** claim to `true`. Find the **ClaimsProviders** element. If the element doesn't exist, add it. Then add the following claims provider:
+### Add the technical profiles
+A claims transformation technical profile accesses the `isForgotPassword` claim. The technical profile is referenced later. When it's invoked, it sets the value of the `isForgotPassword` claim to `true`. Find the **ClaimsProviders** element (if the element doesn't exist, create it), and then add the following claims provider:
```xml <!--
A claims transformation technical profile initiates the **isForgotPassword** cla
<Item Key="setting.forgotPasswordLinkOverride">ForgotPasswordExchange</Item> </Metadata> </TechnicalProfile>
+ <TechnicalProfile Id="LocalAccountWritePasswordUsingObjectId">
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" />
+ </TechnicalProfile>
</TechnicalProfiles> </ClaimsProvider> <!--
A claims transformation technical profile initiates the **isForgotPassword** cla
The **SelfAsserted-LocalAccountSignin-Email** technical profile **setting.forgotPasswordLinkOverride** defines the password reset claims exchange that executes in your user journey.
+The **LocalAccountWritePasswordUsingObjectId** technical profile's **UseTechnicalProfileForSessionManagement** `SM-AAD` session manager is required for the user to perform subsequent logins successfully under [SSO](./custom-policy-reference-sso.md) conditions.
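The diff above shows only fragments of the policy XML. As a rough sketch, a claims-transformation technical profile that forces `isForgotPassword` to `true` can look like the following; the profile name `ForgotPassword`, the `SM-Noop` session manager, and the handler string are common B2C starter-pack conventions rather than values taken from this excerpt, so verify them against the full article.

```xml
<!-- Sketch only: a claims transformation technical profile that sets isForgotPassword to true
     every time it is invoked. Names follow B2C starter-pack conventions; confirm against the
     linked article before use. -->
<TechnicalProfile Id="ForgotPassword">
  <DisplayName>Forgot your password?</DisplayName>
  <Protocol Name="Proprietary"
            Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <OutputClaims>
    <!-- AlwaysUseDefaultValue forces the claim to the default value on every invocation. -->
    <OutputClaim ClaimTypeReferenceId="isForgotPassword" DefaultValue="true" AlwaysUseDefaultValue="true" />
  </OutputClaims>
  <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
</TechnicalProfile>
```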
+ ### Add the password reset sub journey The user can now sign in, sign up, and perform password reset in your user journey. To better organize the user journey, you can use a [sub journey](subjourneys.md) to handle the password reset flow.
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
The following template can be used to create a Conditional Access policy with di
## Template 3: Block locations with Conditional Access
-With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. More information about the location condition in Conditional Access can be found in the article,
-[Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md
-
-Configure Conditional Access through Azure portal or Microsoft Graph APIs to enable a Conditional Access policy blocking access to specific locations.
-For more information about the location condition in Conditional Access can be found in the article, [Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md)
+With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. Configure Conditional Access via the Azure portal or Microsoft Graph APIs to enable a Conditional Access policy that blocks access to specific locations. For more information, see [Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md).
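As an illustration of the Microsoft Graph option, a location-based blocking policy can be created with a single request along these lines. This is a sketch only: the endpoint and property names come from the Microsoft Graph Conditional Access API, `<named-location-id>` is a placeholder for a named location you've already defined, and the user and application scoping should be adjusted for your tenant.

```
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-Type: application/json

{
  "displayName": "Template 3: Block access from selected locations (sketch)",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "applications": { "includeApplications": ["All"] },
    "users": { "includeUsers": ["All"] },
    "locations": { "includeLocations": ["<named-location-id>"] }
  },
  "grantControls": { "operator": "OR", "builtInControls": ["block"] }
}
```

Creating the policy in report-only mode (`enabledForReportingButNotEnforced`) lets you review its impact in the sign-in logs before switching `state` to `enabled`.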
### Define locations
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
In your own environment, if your SPA app uses MSAL.js 1.3 or earlier and the imp
1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **D tokens (used for implicit and hybrid flows)** check boxes.
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
1. Select **Save**.
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Use the steps below to provision roles for a user to your application. Note that
- **SingleAppRoleAssignment** - **When to use:** Use the SingleAppRoleAssignment expression to provision a single role for a user and to specify the primary role.
- - **How to configure:** Use the steps described above to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from: (roles[primary eq "True"].display, roles[primary eq "True].type, and roles[primary eq "True"].value). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
-
+ - **How to configure:** Use the steps described above to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from (`roles[primary eq "True"].display`, `roles[primary eq "True"].type`, and `roles[primary eq "True"].value`). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
+ ![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png) - **Things to consider** - Ensure that multiple roles are not assigned to a user. We cannot guarantee which role will be provisioned.
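For reference, the SingleAppRoleAssignment expression typically takes the user's app role assignments as its only input, along these lines (a sketch; confirm the exact expression in the expression builder for your app):

```
SingleAppRoleAssignment([appRoleAssignments])
```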
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 06/09/2022 Last updated : 06/23/2022
With phone call verification during SSPR or Azure AD Multi-Factor Authentication
If you have problems with phone authentication for Azure AD, review the following troubleshooting steps: * "You've hit our limit on verification calls" or "You've hit our limit on text verification codes" error messages during sign-in
- * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation does not apply to the Microsoft Entra Authenticator app or verification codes. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes.
+ * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation does not apply to Microsoft Authenticator or verification codes. If you have hit these limits, you can use the Authenticator app, a verification code, or try to sign in again in a few minutes.
* "Sorry, we're having trouble verifying your account" error message during sign-in * Microsoft may limit or block voice or SMS authentication attempts that are performed by the same user, phone number, or organization due to high number of voice or SMS authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support. * Blocked caller ID on a single device.
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
The two-gate policy requires two pieces of authentication data, such as an email
* A custom domain has been configured for your Azure AD tenant, such as *contoso.com*; or * Azure AD Connect is synchronizing identities from your on-premises directory
-You can disable the use of SSPR for administrator accounts using the [Set-MsolCompanySettings](/powershell/module/msonline/set-msolcompanysettings) PowerShell cmdlet. The `-SelfServePasswordResetEnabled $False` parameter disables SSPR for administrators.
+You can disable the use of SSPR for administrator accounts using the [Set-MsolCompanySettings](/powershell/module/msonline/set-msolcompanysettings) PowerShell cmdlet. The `-SelfServePasswordResetEnabled $False` parameter disables SSPR for administrators. Policy changes to disable or enable SSPR for administrator accounts can take up to 60 minutes to take effect.
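For reference, the full call is a short PowerShell sequence. This sketch assumes the MSOnline module is installed and that the signed-in account can modify company settings.

```powershell
# Connect to the tenant with the MSOnline module, then disable SSPR for administrator accounts.
Connect-MsolService
Set-MsolCompanySettings -SelfServePasswordResetEnabled $False
```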
### Exceptions
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Title: Use additional context in Microsoft Entra Authenticator notifications (Preview) - Azure Active Directory
+ Title: Use additional context in Microsoft Authenticator notifications (Preview) - Azure Active Directory
description: Learn how to use additional context in MFA notifications Previously updated : 06/08/2022 Last updated : 06/23/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Entra Authenticator app notifications (Preview) - Authentication Methods Policy
+# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
-This topic covers how to improve the security of user sign-in by adding the application and location in Microsoft Entra Authenticator app push notifications.
+This topic covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator app push notifications.
## Prerequisites
To turn off additional context, you'll need to PATCH remove **displayAppInformat
To enable additional context in the Azure AD portal, complete the following steps:
-1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Entra Authenticator**.
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. Select the target users, click the three dots on the right, and click **Configure**. ![Screenshot of how to configure number match.](media/howto-authentication-passwordless-phone/configure.png)
Additional context is not supported for Network Policy Server (NPS).
## Next steps
-[Authentication methods in Azure Active Directory - Microsoft Entra Authenticator app](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 06/09/2022 Last updated : 06/23/2022
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This topic covers how to enable number matching in Microsoft Entra Authenticator push notifications to improve user sign-in security.
+This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
>[!NOTE] >Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
To turn number matching off, you will need to PATCH remove **numberMatchingRequi
To enable number matching in the Azure AD portal, complete the following steps:
-1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Entra Authenticator**.
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. Select the target users, click the three dots on the right, and click **Configure**. ![Screenshot of configuring number match.](media/howto-authentication-passwordless-phone/configure.png)
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Title: Nudge users to set up Microsoft Entra Authenticator app - Azure Active Directory
-description: Learn how to move your organization away from less secure authentication methods to the Microsoft Entra Authenticator app
+ Title: Nudge users to set up Microsoft Authenticator - Azure Active Directory
+description: Learn how to move your organization away from less secure authentication methods to Microsoft Authenticator
Previously updated : 06/09/2022 Last updated : 06/23/2022
-# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Entra Authenticator app in Azure AD to improve and secure user sign-in events.
+# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to run a registration campaign to set up Microsoft Entra Authenticator - Microsoft Entra Authenticator app
+# How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator
-You can nudge users to set up the Microsoft Entra Authenticator app during sign-in. Users will go through their regular sign-in, perform multifactor authentication as usual, and then be prompted to set up the Microsoft Entra Authenticator app. You can include or exclude users or groups to control who gets nudged to set up the app. This allows targeted campaigns to move users from less secure authentication methods to the Authenticator app.
+You can nudge users to set up Microsoft Authenticator during sign-in. Users will go through their regular sign-in, perform multifactor authentication as usual, and then be prompted to set up Microsoft Authenticator. You can include or exclude users or groups to control who gets nudged to set up the app. This allows targeted campaigns to move users from less secure authentication methods to the Authenticator app.
In addition to choosing who can be nudged, you can define how many days a user can postpone, or "snooze", the nudge. If a user taps **Not now** to snooze the app setup, they'll be nudged again on the next MFA attempt after the snooze duration has elapsed.
In addition to choosing who can be nudged, you can define how many days a user c
- Users can't have already set up the Authenticator app for push notifications on their account. - Admins need to enable users for the Authenticator app using one of these policies: - MFA Registration Policy: Users will need to be enabled for **Notification through mobile app**.
- - Authentication Methods Policy: Users will need to be enabled for the Authenticator app and the Authentication mode set to **Any** or **Push**. If the policy is set to **Passwordless**, the user won't be eligible for the nudge. For more information about how to set the Authentication mode, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+ - Authentication Methods Policy: Users will need to be enabled for the Authenticator app and the Authentication mode set to **Any** or **Push**. If the policy is set to **Passwordless**, the user won't be eligible for the nudge. For more information about how to set the Authentication mode, see [Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md).
## User experience
In addition to choosing who can be nudged, you can define how many days a user c
1. User taps **Next** and steps through the Authenticator app setup. 1. First download the app.
- ![User downloads the Microsoft Entra Authenticator app](./media/how-to-nudge-authenticator-app/download.png)
+ ![User downloads Microsoft Authenticator](./media/how-to-nudge-authenticator-app/download.png)
1. See how to set up the Authenticator app.
- ![User sets up the Microsoft Entra Authenticator app](./media/how-to-nudge-authenticator-app/setup.png)
+ ![User sets up Microsoft Authenticator](./media/how-to-nudge-authenticator-app/setup.png)
1. Scan the QR Code.
It's the same as snoozing.
## Next steps
-[Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md)
+[Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md)
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Title: Passwordless sign-in with the Microsoft Entra Authenticator app - Azure Active Directory
-description: Enable passwordless sign-in to Azure AD using the Microsoft Entra Authenticator app
+ Title: Passwordless sign-in with Microsoft Authenticator - Azure Active Directory
+description: Enable passwordless sign-in to Azure AD using Microsoft Authenticator
Previously updated : 06/15/2022 Last updated : 06/23/2022
-# Enable passwordless sign-in with the Microsoft Entra Authenticator app
+# Enable passwordless sign-in with Microsoft Authenticator
-The Microsoft Entra Authenticator app can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology.
+Microsoft Authenticator can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology.
This authentication technology can be used on any device platform, including mobile. This technology can also be used with any app or website that integrates with Microsoft Authentication Libraries.
To use passwordless authentication in Azure AD, first enable the combined regist
### Enable passwordless phone sign-in authentication methods
-Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Entra Authenticator** authentication method policy manages both the traditional push MFA method, as well as the passwordless authentication method.
+Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method, as well as the passwordless authentication method.
To enable the authentication method for passwordless phone sign-in, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) with an *authentication policy administrator* account. 1. Search for and select *Azure Active Directory*, then browse to **Security** > **Authentication methods** > **Policies**.
-1. Under **Microsoft Entra Authenticator**, choose the following options:
+1. Under **Microsoft Authenticator**, choose the following options:
1. **Enable** - Yes or No 1. **Target** - All users or Select users 1. Each added group or user is enabled by default to use Microsoft Authenticator in both passwordless and push notification modes ("Any" mode). To change this, for each row:
Users register themselves for the passwordless authentication method of Azure AD
1. Sign in, then click **Add method** > **Authenticator app** > **Add** to add Microsoft Authenticator. 1. Follow the instructions to install and configure the Microsoft Authenticator app on your device. 1. Select **Done** to complete Authenticator configuration.
-1. In **Microsoft Entra Authenticator**, choose **Enable phone sign-in** from the drop-down menu for the account registered.
+1. In **Microsoft Authenticator**, choose **Enable phone sign-in** from the drop-down menu for the account registered.
1. Follow the instructions in the app to finish registering the account for passwordless phone sign-in.
-An organization can direct its users to sign in with their phones, without using a password. For further assistance configuring Microsoft Authenticator and enabling phone sign-in, see [Sign in to your accounts using the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c).
+An organization can direct its users to sign in with their phones, without using a password. For further assistance configuring Microsoft Authenticator and enabling phone sign-in, see [Sign in to your accounts using the Microsoft Authenticator app](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c).
> [!NOTE] > Users who aren't allowed by policy to use phone sign-in are no longer able to enable it within Microsoft Authenticator.
The user is then presented with a number. The app prompts the user to authentica
After the user has utilized passwordless phone sign-in, the app continues to guide the user through this method. However, the user will see the option to choose another method. ## Known Issues
active-directory Howto Mfaserver Deploy Mobileapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-mobileapp.md
Title: Azure MFA Server Mobile App Web Service - Azure Active Directory
-description: Configure MFA server to send push notifications to users with the Microsoft Entra Authenticator App.
+description: Configure MFA server to send push notifications to users with the Microsoft Authenticator App.
Previously updated : 06/09/2022 Last updated : 06/23/2022
# Enable mobile app authentication with Azure Multi-Factor Authentication Server
-The Microsoft Entra Authenticator app offers an additional out-of-band verification option. Instead of placing an automated phone call or SMS to the user during login, Azure Multi-Factor Authentication pushes a notification to the Authenticator app on the user's smartphone or tablet. The user simply taps **Verify** (or enters a PIN and taps "Authenticate") in the app to complete their sign-in.
+The Microsoft Authenticator app offers an additional out-of-band verification option. Instead of placing an automated phone call or SMS to the user during login, Azure Multi-Factor Authentication pushes a notification to the Authenticator app on the user's smartphone or tablet. The user simply taps **Verify** (or enters a PIN and taps "Authenticate") in the app to complete their sign-in.
Using a mobile app for two-step verification is preferred when phone reception is unreliable. If you use the app as an OATH token generator, it doesn't require any network or internet connection.
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
description: Learn how to improve your security posture by blocking legacy authe
Previously updated : 02/14/2022 Last updated : 06/21/2022
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020
> For MFA to be effective, you also need to block legacy authentication. This is because legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA, making them preferred entry points for adversaries attacking your organization... >
->...The numbers on legacy authentication from an analysis of Azure Active Directory (Azure AD) traffic are stark:
+> ...The numbers on legacy authentication from an analysis of Azure Active Directory (Azure AD) traffic are stark:
> > - More than 99 percent of password spray attacks use legacy authentication protocols > - More than 97 percent of credential stuffing attacks use legacy authentication
This article assumes that you're familiar with the [basic concepts](overview.md)
## Scenario description
-Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, a widely used industry-standard method for collecting user name and password information. Typically, legacy authentication clients can't enforce any type of second factor authentication. Examples of applications that commonly or only use legacy authentication are:
+Azure AD supports the most widely used authentication and authorization protocols, including legacy authentication. Legacy authentication can't directly prompt users for second factor authentication or other authentication requirements needed to satisfy Conditional Access policies. This authentication pattern includes basic authentication, a widely used industry-standard method for collecting user name and password information. Examples of applications that commonly or only use legacy authentication are:
- Microsoft Office 2013 or older. - Apps using mail protocols like POP, IMAP, and SMTP AUTH.
Single factor authentication (for example, username and password) isn't enough t
How can you prevent apps using legacy authentication from accessing your tenant's resources? The recommendation is to just block them with a Conditional Access policy. If necessary, you allow only certain users and specific network locations to use apps that are based on legacy authentication.
-Conditional Access policies are enforced after the first-factor authentication has been completed. Therefore, Conditional Access isn't intended as a first line defense for scenarios like denial-of-service (DoS) attacks, but can utilize signals from these events (for example, the sign-in risk level, location of the request, and so on) to determine access.
- ## Implementation
-This section explains how to configure a Conditional Access policy to block legacy authentication.
+This section explains how to configure a Conditional Access policy to block legacy authentication.
### Messaging protocols that support legacy authentication
For more information about these authentication protocols and services, see [Sig
### Identify legacy authentication use
-Before you can block legacy authentication in your directory, you need to first understand if your users have apps that use legacy authentication and how it affects your overall directory. Azure AD sign-in logs can be used to understand if you're using legacy authentication.
+Before you can block legacy authentication in your directory, you need to first understand if your users have clients that use legacy authentication. Below, you'll find useful information to identify and triage where clients are using legacy authentication.
+
+#### Indicators from Azure AD
1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**. 1. Add the Client App column if it isn't shown by clicking on **Columns** > **Client App**.
Before you can block legacy authentication in your directory, you need to first
Filtering will only show you sign-in attempts that were made by legacy authentication protocols. Clicking on each individual sign-in attempt will show you more details. The **Client App** field under the **Basic Info** tab will indicate which legacy authentication protocol was used.
-These logs will indicate which users are still depending on legacy authentication and which applications are using legacy protocols to make authentication requests. For users that don't appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
+These logs will indicate where users are using clients that are still depending on legacy authentication. For users that don't appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
+
+Additionally, to help triage legacy authentication within your tenant use the [Sign-ins using legacy authentication workbook](../reports-monitoring/workbook-legacy%20authentication.md).
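If your sign-in logs are also routed to a Log Analytics workspace, a query along these lines can surface the same information. This is a sketch; the exact `ClientAppUsed` strings in your logs may vary, so spot-check them first.

```kusto
// Sign-ins whose client app is neither a browser nor a modern-auth desktop/mobile client
// generally indicate legacy authentication protocols.
SigninLogs
| where ClientAppUsed !in ("Browser", "Mobile Apps and Desktop clients")
| summarize SignInCount = count() by UserPrincipalName, ClientAppUsed, AppDisplayName
| sort by SignInCount desc
```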
+
+#### Indicators from client
+
+To determine if a client is using legacy or modern authentication based on the dialog box presented at sign-in, see the article [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online#authentication-dialog).
+
+## Important considerations
+
+Many clients that previously only supported legacy authentication now support modern authentication. Clients that support both legacy and modern authentication may require a configuration update to move from legacy to modern authentication. If you see **modern mobile**, **desktop client**, or **browser** for a client in the Azure AD logs, it's using modern authentication. If it has a specific client or protocol name, such as **Exchange ActiveSync**, it's using legacy authentication. The client types in Conditional Access, Azure AD Sign-in logs, and the legacy authentication workbook distinguish between modern and legacy authentication clients for you.
+
+- Clients that support modern authentication but aren't configured to use modern authentication should be updated or reconfigured to use modern authentication.
+- All clients that don't support modern authentication should be replaced.
+
+> [!IMPORTANT]
+>
+> **Exchange Active Sync with Certificate-based authentication (CBA)**
+>
+> When implementing Exchange Active Sync (EAS) with CBA, configure clients to use modern authentication. Clients not using modern authentication for EAS with CBA **are not blocked** with [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). However, these clients **are blocked** by Conditional Access policies configured to block legacy authentication.
+>
+> For more information on implementing support for CBA with Azure AD and modern authentication, see [How to configure Azure AD certificate-based authentication (Preview)](../authentication/how-to-certificate-based-authentication.md). As another option, CBA performed at a federation server can be used with modern authentication.
+
+If you're using Microsoft Intune, you might be able to change the authentication type using the email profile you push or deploy to your devices. If you're using iOS devices (iPhones and iPads), you should take a look at [Add e-mail settings for iOS and iPadOS devices in Microsoft Intune](/mem/intune/configuration/email-settings-ios).
-## Block legacy authentication
+## Block legacy authentication
There are two ways to use Conditional Access policies to block legacy authentication.
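One common approach is the client apps condition, which treats Exchange ActiveSync clients and "other clients" as legacy. As an illustration, a Graph request for such a policy can look like the sketch below; the property names come from the Microsoft Graph Conditional Access API, the scoping is deliberately broad, and the policy is created in report-only mode so you can verify impact before enforcing it.

```
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-Type: application/json

{
  "displayName": "Block legacy authentication (sketch)",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "applications": { "includeApplications": ["All"] },
    "users": { "includeUsers": ["All"] },
    "clientAppTypes": ["exchangeActiveSync", "other"]
  },
  "grantControls": { "operator": "OR", "builtInControls": ["block"] }
}
```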
You can select all available grant controls for the **Other clients** condition;
- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md) - If you aren't familiar with configuring Conditional Access policies yet, see [require MFA for specific apps with Azure Active Directory Conditional Access](../authentication/tutorial-enable-azure-mfa.md) for an example. - For more information about modern authentication support, see [How modern authentication works for Office client apps](/office365/enterprise/modern-auth-for-office-2013-and-2016) -- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
+- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
description: Troubleshoot and respond to changes in user state faster with conti
- Previously updated : 01/25/2022+ Last updated : 06/09/2022
The potential IP address mismatch between Azure AD & resource provider table all
This workbook table sheds light on these scenarios by displaying the respective IP addresses and whether a CAE token was issued during the session.
+#### Continuous access evaluation insights per sign-in
+
+The continuous access evaluation insights per sign-in page in the workbook connects multiple requests from the sign-in logs and displays a single request where a CAE token was issued.
+
+This workbook can come in handy when, for example, a user opens Outlook on their desktop and attempts to access resources inside of Exchange Online. This sign-in action may map to multiple interactive and non-interactive sign-in requests in the logs, making issues hard to diagnose.
+ #### IP address configuration Your identity provider and resource providers may see different IP addresses. This mismatch may happen because of the following examples:
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Now that you've created the VM, you need to configure Azure RBAC policy to det
- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges. - **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
-To log in to a VM over SSH, you must have the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor roles assigned for a VM don't automatically have privileges to Azure AD login to the VM over SSH. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+To log in to a VM over SSH, you must have the Virtual Machine Administrator Login or Virtual Machine User Login role on the Resource Group containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resources. An Azure user with the Owner or Contributor roles assigned for a VM doesn't automatically have privileges to use Azure AD login to the VM over SSH. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
There are multiple ways you can configure role assignments for VM, as an example you can use:
There are multiple ways you can configure role assignments for VM, as an example
To configure role assignments for your Azure AD enabled Linux VMs:
+1. Select the **Resource Group** containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resource.
+ 1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment** to open the Add role assignment page.
The following example uses [az role assignment create](/cli/azure/role/assignmen
```azurecli-interactive username=$(az account show --query user.name --output tsv)
-vm=$(az vm show --resource-group AzureADLinuxVM --name myVM --query id -o tsv)
+rg=$(az group show --resource-group myResourceGroup --query id -o tsv)
az role assignment create \ --role "Virtual Machine Administrator Login" \ --assignee $username \
- --scope $vm
+ --scope $rg
``` > [!NOTE]
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Now that you've created the VM, you need to configure Azure RBAC policy to deter
- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges. > [!NOTE]
-> To allow a user to log in to the VM over RDP, you must assign either the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor roles assigned for a VM do not automatically have privileges to log in to the VM over RDP. This is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+> To allow a user to log in to the VM over RDP, you must assign either the Virtual Machine Administrator Login or Virtual Machine User Login role on the Resource Group containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resources. An Azure user with the Owner or Contributor roles assigned for a VM does not automatically have privileges to log in to the VM over RDP. This is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
There are multiple ways you can configure role assignments for VM:
There are multiple ways you can configure role assignments for VM:
To configure role assignments for your Azure AD enabled Windows Server 2019 Datacenter VMs:
+1. Select the **Resource Group** containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resource.
+ 1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment** to open the Add role assignment page.
The following example uses [az role assignment create](/cli/azure/role/assignmen
```AzureCLI $username=$(az account show --query user.name --output tsv)
-$vm=$(az vm show --resource-group myResourceGroup --name myVM --query id -o tsv)
+$rg=$(az group show --resource-group myResourceGroup --query id -o tsv)
az role assignment create \ --role "Virtual Machine Administrator Login" \ --assignee $username \
- --scope $vm
+ --scope $rg
``` > [!NOTE]
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Most IT administrators are familiar with Active Directory Domain Services concep
| Admin management|Organizations will use a combination of domains, organizational units, and groups in AD to delegate administrative rights to manage the directory and resources it controls.| Azure AD provides [built-in roles](./active-directory-users-assign-role-azure-portal.md) with its Azure AD role-based access control (Azure AD RBAC) system, with limited support for [creating custom roles](../roles/custom-overview.md) to delegate privileged access to the identity system, the apps, and resources it controls.</br>Managing roles can be enhanced with [Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to provide just-in-time, time-restricted, or workflow-based access to privileged roles. | | Credential management| Credentials in Active Directory are based on passwords, certificate authentication, and smartcard authentication. Passwords are managed using password policies that are based on password length, expiry, and complexity.|Azure AD uses intelligent [password protection](../authentication/concept-password-ban-bad.md) for cloud and on-premises. Protection includes smart lockout plus blocking common and custom password phrases and substitutions. </br>Azure AD significantly boosts security [through Multi-factor authentication](../authentication/concept-mfa-howitworks.md) and [passwordless](../authentication/concept-authentication-passwordless.md) technologies, like FIDO2. </br>Azure AD reduces support costs by providing users a [self-service password reset](../authentication/concept-sspr-howitworks.md) system. | | **Apps**|||
-| Infrastructure apps|Active Directory forms the basis for many infrastructure on-premises components, for example, DNS, DHCP, IPSec, WiFi, NPS, and VPN access|In a new cloud world, Azure AD, is the new control plane for accessing apps versus relying on networking controls. When users authenticate[, Conditional access (CA)](../conditional-access/overview.md), will control which users, will have access to which apps under required conditions.|
+| Infrastructure apps|Active Directory forms the basis for many infrastructure on-premises components, for example, DNS, DHCP, IPSec, WiFi, NPS, and VPN access|In a new cloud world, Azure AD, is the new control plane for accessing apps versus relying on networking controls. When users authenticate, [Conditional access (CA)](../conditional-access/overview.md) controls which users have access to which apps under required conditions.|
| Traditional and legacy apps| Most on-premises apps use LDAP, Windows-Integrated Authentication (NTLM and Kerberos), or Header-based authentication to control access to users.| Azure AD can provide access to these types of on-premises apps using [Azure AD application proxy](../app-proxy/application-proxy.md) agents running on-premises. Using this method Azure AD can authenticate Active Directory users on-premises using Kerberos while you migrate or need to coexist with legacy apps. | | SaaS apps|Active Directory doesn't support SaaS apps natively and requires federation system, such as AD FS.|SaaS apps supporting OAuth2, SAML, and WS-\* authentication can be integrated to use Azure AD for authentication. | | Line of business (LOB) apps with modern authentication|Organizations can use AD FS with Active Directory to support LOB apps requiring modern authentication.| LOB apps requiring modern authentication can be configured to use Azure AD for authentication. |
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
We have published guidance for managing the business process of integrating apps
A good place to start is by evaluating your use of Active Directory Federation Services (ADFS). Many organizations use ADFS for authentication with SaaS apps, custom Line-of-Business apps, and Microsoft 365 and Azure AD-based apps:
-![Diagram shows on-premises apps, line of business apps, SaaS apps, and, via Azure AD, Office 365 all connecting with dotted lines into Active Directory and AD FS.](\media\five-steps-to-full-application-integration-with-azure-ad\adfs-integration-1.png)
+![Diagram shows on-premises apps, line of business apps, SaaS apps, and, via Azure AD, Office 365 all connecting with dotted lines into Active Directory and AD FS.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-1.png)
You can upgrade this configuration by [replacing ADFS with Azure AD as the center](../manage-apps/migrate-adfs-apps-to-azure.md) of your identity management solution. Doing so enables sign-on for every app your employees want to access, and makes it easy for employees to find any business application they need via the [MyApps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510), in addition to the other benefits mentioned above.
-![Diagram shows on-premises apps via Active Directory and AD FS, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory.](\media\five-steps-to-full-application-integration-with-azure-ad\adfs-integration-2.png)
+![Diagram shows on-premises apps via Active Directory and AD FS, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-2.png)
Once Azure AD becomes the central identity provider, you may be able to switch from ADFS completely, rather than using a federated solution. Apps that previously used ADFS for authentication can now use Azure AD alone.
-![Diagram shows on-premises, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory. Active Directory and AD FS is not present.](\media\five-steps-to-full-application-integration-with-azure-ad\adfs-integration-3.png)
+![Diagram shows on-premises, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory. Active Directory and AD FS is not present.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-3.png)
You can also migrate apps that use a different cloud-based identity provider to Azure AD. Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via M365 licenses, there is no reason to pay the added cost of another IAM solution.
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
New-AzVm `
```
-The user assigned managed identity should be specified using its [resourceID](how-manage-user-assigned-managed-identities.md
-).
+The user assigned managed identity should be specified using its [resourceID](./how-manage-user-assigned-managed-identities.md).
# [Azure CLI](#tab/azure-cli)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Previously updated : 06/21/2022 Last updated : 06/23/2022
The following sections describe current support for administrative unit scenario
| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center | | | :: | :: | :: | | Create or delete administrative units | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Add or remove members individually | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Add or remove members in bulk | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Add or remove members | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Assign administrative unit-scoped administrators | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Add or remove users or devices dynamically based on rules (Preview) | :heavy_check_mark: | :heavy_check_mark: | :x: | | Add or remove groups dynamically based on rules | :x: | :x: | :x: |
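As an illustration of the Microsoft Graph column in the table above, adding an existing user to an administrative unit is a single request. This is a sketch with `{admin-unit-id}` and `{user-id}` as placeholders.

```
POST https://graph.microsoft.com/v1.0/directory/administrativeUnits/{admin-unit-id}/members/$ref
Content-Type: application/json

{
  "@odata.id": "https://graph.microsoft.com/v1.0/users/{user-id}"
}
```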
active-directory Bgsonline Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bgsonline-tutorial.md
To configure Azure AD single sign-on with BGS Online, perform the following step
For test environment, use this pattern `https://millwardbrown.marketingtracker.nl/mt5/sso/saml/AssertionConsumerService.aspx` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [BGS Online support team](mailTo:bgsdashboardteam@millwardbrown.com) to get these values.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [BGS Online support team](mailto:bgsdashboardteam@millwardbrown.com) to get these values.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
active-directory Tableau Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md
Before you configure and enable automatic user provisioning, decide which users
This section guides you through the steps to configure the Azure AD provisioning service. Use it to create, update, and disable users or groups in Tableau Online based on user or group assignments in Azure AD. > [!TIP]
-> You also can enable SAML-based single sign-on for Tableau Online. Follow the instructions in the [Tableau Online single sign-on tutorial](tableauonline-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
+> You also can enable SAML-based Single Sign-On for Tableau Online. Follow the instructions in the [Tableau Online single sign-on tutorial](tableauonline-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
### Configure automatic user provisioning for Tableau Online in Azure AD
You can use the **Synchronization Details** section to monitor progress and foll
For information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Update a Tableau Cloud application to use the Tableau Cloud SCIM 2.0 endpoint
+In June 2022, Tableau released a SCIM 2.0 connector. Completing the steps below will update applications configured to use the Tableau API endpoint to use the SCIM 2.0 endpoint. These steps will remove any customizations previously made to the Tableau Cloud application, including:
+* Authentication details
+* Scoping filters
+* Custom attribute mappings
+
+> [!NOTE]
+> Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
+
+1. Sign in to the Azure portal at https://portal.azure.com
+2. Navigate to your current Tableau Cloud app under Azure Active Directory > Enterprise Applications
+3. In the Properties section of your new custom app, copy the Object ID.
+
+ ![Screenshot of Tableau Cloud app in the Azure portal.](./media/tableau-online-provisioning-tutorial/tableau-cloud-properties.png)
+
+4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer and sign in as the administrator for the Azure AD tenant where your app is added.
+
+ ![Screenshot of Microsoft Graph explorer sign in page.](./media/workplace-by-facebook-provisioning-tutorial/permissions.png)
+
+5. Check to make sure the account being used has the correct permissions. The permission "Directory.ReadWrite.All" is required to make this change.
+
+ ![Screenshot of Microsoft Graph settings option.](./media/workplace-by-facebook-provisioning-tutorial/permissions-2.png)
+
+ ![Screenshot of Microsoft Graph permissions.](./media/workplace-by-facebook-provisioning-tutorial/permissions-3.png)
+
+6. Using the ObjectID selected from the app previously, run the following command:
+
+```
+GET https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/
+```
+
+7. Taking the "id" value from the response body of the GET request from above, run the command below, replacing "[job-id]" with the id value from the GET request. The value should have the format of "Tableau.xxxxxxxxxxxxxxx.xxxxxxxxxxxxxxx":
+```
+DELETE https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[job-id]
+```
+8. In the Graph Explorer, run the command below. Replace "[object-id]" with the service principal ID (object ID) copied from the third step.
+```
+POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs { "templateId": "TableauOnlineSCIM" }
+```
+
+![Screenshot of Microsoft Graph request.](./media/tableau-online-provisioning-tutorial/tableau-cloud-graph.png)
+
+9. Return to the first web browser window and select the Provisioning tab for your application. Your configuration will have been reset. You can confirm the upgrade has taken place by confirming the Job ID starts with "TableauOnlineSCIM".
+
+10. Under the Admin Credentials section, select "Bearer Authentication" as the authentication method and enter the Tenant URL and Secret Token of the Tableau instance you wish to provision to.
+![Screenshot of Admin Credentials in Tableau Cloud in the Azure portal.](./media/tableau-online-provisioning-tutorial/tableau-cloud-creds.png)
+
+11. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
+
+> [!NOTE]
+> Failure to restore the previous settings may result in attributes (name.formatted, for example) updating in Tableau Cloud unexpectedly. Be sure to check the configuration before enabling provisioning.
+ ## Change log * 09/30/2020 - Added support for attribute "authSetting" for Users.
For information on how to read the Azure AD provisioning logs, see [Reporting on
<!--Image references--> [1]: ./media/tableau-online-provisioning-tutorial/tutorial_general_01.png [2]: ./media/tableau-online-provisioning-tutorial/tutorial_general_02.png
-[3]: ./media/tableau-online-provisioning-tutorial/tutorial_general_03.png
+[3]: ./media/tableau-online-provisioning-tutorial/tutorial_general_03.png
active-directory Voyance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/voyance-tutorial.md
In this section, you enable Britta Simon to use Azure single sign-on by granting
In this section, a user called Britta Simon is created in Voyance. Voyance supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Voyance, a new one is created after authentication. >[!NOTE]
->If you need to create a user manually, you need to contact [Voyance support team](maiLto:support@nyansa.com).
+>If you need to create a user manually, you need to contact [Voyance support team](mailto:support@nyansa.com).
### Test single sign-on
active-directory Wizergosproductivitysoftware Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wizergosproductivitysoftware-tutorial.md
To configure Azure AD single sign-on with Wizergos Productivity Software, perfor
a. Click **UPLOAD** button to upload the downloaded certificate from Azure AD.
- b. In the **Issuer URL** textbox, paste the **Azure AD Identifier** value which you have copied from Azure portal.
+ b. In the **Issuer URL** textbox, paste the **Azure AD Identifier** value that you copied from the Azure portal.
- c. In the **Single Sign-On URL** textbox, paste the **Login URL** value which you have copied from Azure portal.
+ c. In the **Single Sign-On URL** textbox, paste the **Login URL** value that you copied from the Azure portal.
- d. In the **Single Sign-Out URL** textbox, paste the **Logout URL** value which you have copied from Azure portal.
+ d. In the **Single Sign-Out URL** textbox, paste the **Logout URL** value that you copied from the Azure portal.
e. Click **Save** button.
In this section, you enable Britta Simon to use Azure single sign-on by granting
### Create Wizergos Productivity Software test user
-In this section, you create a user called Britta Simon in Wizergos Productivity Software. Work with [Wizergos Productivity Software support team](mailTo:support@wizergos.com) to add the users in the Wizergos Productivity Software platform.
+In this section, you create a user called Britta Simon in Wizergos Productivity Software. Work with [Wizergos Productivity Software support team](mailto:support@wizergos.com) to add the users in the Wizergos Productivity Software platform.
### Test single sign-on
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
> [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-It’s important to plan your verifiable credential solution so that in addition to issuing and or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven’t reviewed them already, we recommend you review [Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md) and the[ FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial.
+It’s important to plan your verifiable credential solution so that, in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven’t reviewed them already, we recommend you review [Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial.
This architectural overview introduces the capabilities and components of the Azure Active Directory Verifiable Credentials service. For more detailed information on issuance and validation, see
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[aks-nfs]: azure-nfs-volume.md [anf]: ../azure-netapp-files/azure-netapp-files-introduction.md [anf-delegate-subnet]: ../azure-netapp-files/azure-netapp-files-delegate-subnet.md
-[anf-quickstart]: ../azure-netapp-files/
[anf-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all [anf-waitlist]: https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8cq17Xv9yVBtRCSlcD_gdVUNUpUWEpLNERIM1NOVzA5MzczQ0dQR1ZTSS4u [az-aks-show]: /cli/azure/aks#az_aks_show
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations [vm-sizes]: ../virtual-machines/sizes.md [use-system-pool]: use-system-pools.md
-[ip-limitations]: ../virtual-network/virtual-network-ip-addresses-overview-arm#standard
[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks [vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine [az-list-ips]: /cli/azure/vmss#az_vmss_list_instance_public_ips
api-management Api Management Troubleshoot Cannot Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-troubleshoot-cannot-add-custom-domain.md
The API Management service does not have permission to access the key vault that
To resolve this issue, follow these steps:
-1. Go to the [Azure portal](Https://portal.azure.com), select your API Management instance, and then select **Managed identities**. Make sure that the **Register with Azure Active Directory** option is set to **Yes**.
+1. Go to the [Azure portal](https://portal.azure.com), select your API Management instance, and then select **Managed identities**. Make sure that the **Register with Azure Active Directory** option is set to **Yes**.
![Registering with Azure Active Directory](./media/api-management-troubleshoot-cannot-add-custom-domain/register-with-aad.png) 1. In the Azure portal, open the **Key vaults** service, and select the key vault that you're trying to use for the custom domain. 1. Select **Access policies**, and check whether there is a service principal that matches the name of the API Management service instance. If there is, select the service principal, and make sure that it has the **Get** permission listed under **Secret permissions**.
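If you prefer to script this fix, the following is a minimal PowerShell sketch that grants the API Management instance's system-assigned identity the **Get** secret permission on the key vault. The resource group, service, and vault names are placeholders, and reading the principal ID from `Get-AzApiManagement` assumes the instance already has a system-assigned identity enabled.

```azurepowershell-interactive
# Placeholder names; replace with your own resource group, API Management instance, and key vault.
$apim = Get-AzApiManagement -ResourceGroupName "myResourceGroup" -Name "myApimInstance"

# Grant the APIM system-assigned managed identity permission to read secrets from the vault.
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" `
    -ObjectId $apim.Identity.PrincipalId `
    -PermissionsToSecrets get
```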
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
To find the `platformVersion` property in the portal:
1. In **API version**, select a current version such as `2021-08-01` or later. 1. In the JSON view, scroll down to find the `platformVersion` property.
- :::image type="content" source="media/compute-infrastructure/platformversion property.png" alt-text="platformVersion property in JSON view":::
+ :::image type="content" source="media/compute-infrastructure/platformversion-property.png" alt-text="platformVersion property in JSON view":::
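If you'd rather not open the JSON view, a rough PowerShell alternative is sketched below; it reads the same property through the generic resource API. The resource group and service names are placeholders, and the API version is pinned because older API versions don't return `platformVersion`.

```azurepowershell-interactive
# Placeholder names; replace with your resource group and API Management service name.
$apim = Get-AzResource -ResourceGroupName "myResourceGroup" `
    -ResourceType "Microsoft.ApiManagement/service" `
    -Name "myApimInstance" `
    -ExpandProperties -ApiVersion "2021-08-01"

# Shows the compute platform version hosting the instance (for example, stv1 or stv2).
$apim.Properties.platformVersion
```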
## How do I migrate to the `stv2` platform?
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Below are the current restrictions of WebSocket support in API Management:
* Azure CLI, PowerShell, and SDK currently do not support management operations of WebSocket APIs. * 200 active connections limit per unit. * Websockets APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
+* Currently, the set header policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests.
### Unsupported policies
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
Web debugging tools like Fiddler, can help you debug web applications by capturi
Use the web debugger of your choice. In this sample, we use Fiddler to capture and analyze HTTP or HTTPS traffic. Follow these instructions:
-1. Download the Fiddler tool at <https://www.telerik.com/download/fiddler>.
+1. Download [Fiddler](https://www.telerik.com/download/fiddler).
> [!NOTE] > Choose Fiddler4 if the capturing computer has .NET 4 installed. Otherwise, choose Fiddler2. 2. Right-click the setup executable, and run as administrator to install.
- ![Screenshot shows the Fiddler tool setup program with a contextual menu with Run as administrator selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-12.png)
+ ![Screenshot shows the Fiddler setup program with a contextual menu with Run as administrator selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-12.png)
3. When you open Fiddler, it should automatically start capturing traffic (notice **Capturing** in the lower-left corner). Press F12 to start or stop traffic capture.
application-gateway Tutorial Url Route Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-powershell.md
At this point, you have an application gateway that listens for traffic on port
### Add image and video backend pools and port
-Add backend pools named *imagesBackendPool* and *videoBackendPool* to your application gateway[Add-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/add-azapplicationgatewaybackendaddresspool). Add the frontend port for the pools using [Add-AzApplicationGatewayFrontendPort](/powershell/module/az.network/add-azapplicationgatewayfrontendport). Submit the changes to the application gateway using [Set-AzApplicationGateway](/powershell/module/az.network/set-azapplicationgateway).
+Add backend pools named *imagesBackendPool* and *videoBackendPool* to your application gateway using [Add-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/add-azapplicationgatewaybackendaddresspool). Add the frontend port for the pools using [Add-AzApplicationGatewayFrontendPort](/powershell/module/az.network/add-azapplicationgatewayfrontendport). Submit the changes to the application gateway using [Set-AzApplicationGateway](/powershell/module/az.network/set-azapplicationgateway).
```azurepowershell-interactive $appgw = Get-AzApplicationGateway `
applied-ai-services Applied Ai Services Customer Spotlight Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/applied-ai-services-customer-spotlight-use-cases.md
Title: Customer spotlight on use cases
-description: Customer spotlight on use cases
+description: Learn about customers who have used Azure Applied AI Services. See use cases that improve customer experience, streamline processes, and protect data.
- Previously updated : 05/13/2021+ Last updated : 06/02/2022 + # Customer spotlight on use cases
-Customers are already using Applied AI Services to add AI horsepower to their business scenarios.
+Customers are using Azure Applied AI Services to add AI horsepower to their business scenarios:
+
+- Chatbots improve customer experience in a robust, expandable way.
+- AI-driven search offers strong data security and delivers smart results that add value.
+- Azure Form Recognizer increases ROI by using automation to streamline data extraction.
| Partner | Description | Customer story | ||-|-|
-| <center>![Progressive_Logo](./media/logo-progressive-02.png) | **Progressive helps customers make smarter insurance decisions with Bot Service and Cognitive Search.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Read the story](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
-| <center>![Wix Logo](./media/wix-logo-01.png) | **WIX deploys smart search across 150 million websites with Cognitive Search** <br> “We really benefitted from choosing Azure Cognitive Search because we could go to market faster than we had with other products. We don’t have to manage infrastructure, and our developers can spend time on higher-value tasks.”*-Giedrius Graževičius: Project Manager for Search, Wix* | [Read the story](https://customers.microsoft.com/story/764974-wix-partner-professional-services-azure-cognitive-search) |
-| <center>![Chevron logo](./media/chevron-01.png) | **Chevron uses Form Recognizer to extract volumes of data from unstructured reports**<br>“We only have a finite amount of time to extract data, and oftentimes the data that’s left behind is valuable. With this new technology, we're able to extract everything and then decide what we can use to improve our performance.”*-Diane Cillis, Engineering Technologist, Chevron Canada* | [Read the story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services) |
-
+| <center>![Logo of Progressive Insurance, which consists of the word progressive in a slanted font in blue, capital letters.](./media/logo-progressive-02.png) | **Progressive uses Azure Bot Service and Azure Cognitive Search to help customers make smarter insurance decisions.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Insurance shoppers gain new service channel with artificial intelligence chatbot](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
+| <center>![Logo of Wix, which consists of the name Wix in a dark-gray font in lowercase letters.](./media/wix-logo-01.png) | **WIX uses Cognitive Search to deploy smart search across 150 million websites.** <br> "We really benefitted from choosing Azure Cognitive Search because we could go to market faster than we had with other products. We don't have to manage infrastructure, and our developers can spend time on higher-value tasks."*-Giedrius Graževičius: Project Manager for Search, Wix* | [Wix deploys smart, scalable search across 150 million websites with Azure Cognitive Search](https://customers.microsoft.com/story/764974-wix-partner-professional-services-azure-cognitive-search) |
+| <center>![Logo of Chevron. The name Chevron appears above two vertically stacked chevrons that point downward. The top one is blue, and the lower one is red.](./media/chevron-01.png) | **Chevron uses Form Recognizer to extract volumes of data from unstructured reports.**<br>"We only have a finite amount of time to extract data, and oftentimes the data that's left behind is valuable. With this new technology, we're able to extract everything and then decide what we can use to improve our performance."*-Diane Cillis, Engineering Technologist, Chevron Canada* | [Chevron is using AI-powered robotic process automation to extract volumes of data from unstructured reports for analysis](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services) |
## See also
-* [What are Applied AI Services?](what-are-applied-ai-services.md)
-* [Why use Applied AI Services?](why-applied-ai-services.md)
+
+- [What are Applied AI Services?](what-are-applied-ai-services.md)
+- [Why use Applied AI Services?](why-applied-ai-services.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
This section helps you decide which Form Recognizer v3.0 supported feature you s
|<ul><li>**General structured document**</li></ul>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document (preview)**](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul> |<ul><li>**Invoice**</li></ul>| Is your invoice document composed of text in a [supported language](language-support.md#invoice-model)?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul> |<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
-|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the[**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
+|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
|<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model). ## Form Recognizer features and development options
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Train a custom model to analyze and extract data from forms and documents specif
1. Start by creating a new CORS entry in the Blob service.
- 1. Set the **Allowed origins** to **<https://fott-2-1.azurewebsites.net>**.
+ 1. Set the **Allowed origins** to `https://fott-2-1.azurewebsites.net`.
:::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
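If you'd rather set the CORS rule from PowerShell than from the portal, the following sketch shows one way to do it with the Az.Storage module. The storage account name and key are placeholders, and the allowed methods and max age shown here are illustrative; align them with the values the labeling tool expects.

```azurepowershell-interactive
# Placeholder storage account name and key.
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"

# Allow the hosted labeling tool origin to call the Blob service.
$corsRule = @{
    AllowedOrigins  = @("https://fott-2-1.azurewebsites.net")
    AllowedMethods  = @("Get","Head","Put","Post","Delete","Options")
    AllowedHeaders  = @("*")
    ExposedHeaders  = @("*")
    MaxAgeInSeconds = 120
}

Set-AzStorageCORSRule -ServiceType Blob -CorsRules $corsRule -Context $ctx
```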
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 06/13/2022 Last updated : 06/22/2022 recommendations: false
DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), cr
//sample form document
-Uri fileUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");
+Uri fileUri = new Uri("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");
AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-document", fileUri);
foreach (DocumentKeyValuePair kvp in result.KeyValuePairs)
} }
-Console.WriteLine("Detected entities:");
-
-foreach (DocumentEntity entity in result.Entities)
-{
- if (entity.SubCategory == null)
- {
- Console.WriteLine($" Found entity '{entity.Content}' with category '{entity.Category}'.");
- }
- else
- {
- Console.WriteLine($" Found entity '{entity.Content}' with category '{entity.Category}' and sub-category '{entity.SubCategory}'.");
- }
-}
- foreach (DocumentPage page in result.Pages) { Console.WriteLine($"Document Page {page.PageNumber} has {page.Lines.Count} line(s), {page.Words.Count} word(s),");
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Line {i} has content: '{line.Content}'."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {line.BoundingBox[0].X}, Y= {line.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {line.BoundingBox[1].X}, Y= {line.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {line.BoundingBox[2].X}, Y= {line.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {line.BoundingBox[3].X}, Y= {line.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {line.BoundingPolygon[0].X}, Y= {line.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {line.BoundingPolygon[1].X}, Y= {line.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {line.BoundingPolygon[2].X}, Y= {line.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {line.BoundingPolygon[3].X}, Y= {line.BoundingPolygon[3].Y}");
} for (int i = 0; i < page.SelectionMarks.Count; i++)
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Selection Mark {i} is {selectionMark.State}."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {selectionMark.BoundingBox[0].X}, Y= {selectionMark.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {selectionMark.BoundingBox[1].X}, Y= {selectionMark.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {selectionMark.BoundingBox[2].X}, Y= {selectionMark.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {selectionMark.BoundingBox[3].X}, Y= {selectionMark.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {selectionMark.BoundingPolygon[0].X}, Y= {selectionMark.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {selectionMark.BoundingPolygon[1].X}, Y= {selectionMark.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {selectionMark.BoundingPolygon[2].X}, Y= {selectionMark.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {selectionMark.BoundingPolygon[3].X}, Y= {selectionMark.BoundingPolygon[3].Y}");
} }
AzureKeyCredential credential = new AzureKeyCredential(key);
DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential); //sample document
+// sample form document
Uri fileUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"); AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-layout", fileUri);
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Line {i} has content: '{line.Content}'."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {line.BoundingBox[0].X}, Y= {line.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {line.BoundingBox[1].X}, Y= {line.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {line.BoundingBox[2].X}, Y= {line.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {line.BoundingBox[3].X}, Y= {line.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {line.BoundingPolygon[0].X}, Y= {line.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {line.BoundingPolygon[1].X}, Y= {line.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {line.BoundingPolygon[2].X}, Y= {line.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {line.BoundingPolygon[3].X}, Y= {line.BoundingPolygon[3].Y}");
} for (int i = 0; i < page.SelectionMarks.Count; i++)
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Selection Mark {i} is {selectionMark.State}."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {selectionMark.BoundingBox[0].X}, Y= {selectionMark.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {selectionMark.BoundingBox[1].X}, Y= {selectionMark.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {selectionMark.BoundingBox[2].X}, Y= {selectionMark.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {selectionMark.BoundingBox[3].X}, Y= {selectionMark.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {selectionMark.BoundingPolygon[0].X}, Y= {selectionMark.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {selectionMark.BoundingPolygon[1].X}, Y= {selectionMark.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {selectionMark.BoundingPolygon[2].X}, Y= {selectionMark.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {selectionMark.BoundingPolygon[3].X}, Y= {selectionMark.BoundingPolygon[3].Y}");
} }
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports. * [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com).
+After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com/studio/document).
In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
1. Start by creating a new CORS entry in the Blob service.
-1. Set the **Allowed origins** to **<https://formrecognizer.appliedai.azure.com>**.
+1. Set the **Allowed origins** to `https://formrecognizer.appliedai.azure.com`.
:::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 06/13/2022 Last updated : 06/22/2022 recommendations: false
In this quickstart you'll use following features to analyze and extract data and
```console mkdir form-recognizer-app && cd form-recognizer-app ```
-
+ ```powershell mkdir form-recognizer-app; cd form-recognizer-app ```
Extract text, tables, structure, key-value pairs, and named entities from docume
public static void main(String[] args) {
- // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
-
- // sample document
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
- String modelId = "prebuilt-document";
- SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
- client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
-
- AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
-
- // pages
- analyzeResult.getPages().forEach(documentPage -> {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
-
- // lines
- documentPage.getLines().forEach(documentLine ->
- System.out.printf("Line %s is within a bounding box %s.%n",
- documentLine.getContent(),
- documentLine.getBoundingBox().toString()));
-
- // words
- documentPage.getWords().forEach(documentWord ->
- System.out.printf("Word %s has a confidence score of %.2f%n.",
- documentWord.getContent(),
- documentWord.getConfidence()));
- });
-
- // tables
- List <DocumentTable> tables = analyzeResult.getTables();
- for (int i = 0; i < tables.size(); i++) {
- DocumentTable documentTable = tables.get(i);
- System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
- documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell -> {
- System.out.printf("Cell '%s', has row index %d and column index %d.%n",
- documentTableCell.getContent(),
- documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
- });
- System.out.println();
- }
-
- // Entities
- analyzeResult.getEntities().forEach(documentEntity -> {
- System.out.printf("Entity category : %s, sub-category %s%n: ",
- documentEntity.getCategory(), documentEntity.getSubCategory());
- System.out.printf("Entity content: %s%n: ", documentEntity.getContent());
- System.out.printf("Entity confidence: %.2f%n", documentEntity.getConfidence());
- });
-
- // Key-value pairs
- analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
- System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent());
- System.out.printf("Key content bounding region: %s%n",
- documentKeyValuePair.getKey().getBoundingRegions().toString());
-
- if (documentKeyValuePair.getValue() != null) {
- System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
- System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
- }
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+ String modelId = "prebuilt-document";
+ SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+
+ AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
+
+ // pages
+ analyzeResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
+
+ // lines
+ documentPage.getLines().forEach(documentLine ->
+ System.out.printf("Line %s is within a bounding polygon %s.%n",
+ documentLine.getContent(),
+ documentLine.getBoundingPolygon().toString()));
+
+ // words
+ documentPage.getWords().forEach(documentWord ->
+ System.out.printf("Word %s has a confidence score of %.2f%n.",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+ });
+
+ // tables
+ List <DocumentTable> tables = analyzeResult.getTables();
+ for (int i = 0; i < tables.size(); i++) {
+ DocumentTable documentTable = tables.get(i);
+ System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
+ documentTable.getColumnCount());
+ documentTable.getCells().forEach(documentTableCell -> {
+ System.out.printf("Cell '%s', has row index %d and column index %d.%n",
+ documentTableCell.getContent(),
+ documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
});
+ System.out.println();
}+
+ // Key-value pairs
+ analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
+ System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent());
+ System.out.printf("Key content bounding region: %s%n",
+ documentKeyValuePair.getKey().getBoundingRegions().toString());
+
+ if (documentKeyValuePair.getValue() != null) {
+ System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
+ System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
+ }
+ });
}
+}
``` <!-- markdownlint-disable MD036 -->
Extract text, selection marks, text styles, table structures, and bounding regio
private static final String endpoint = "<your-endpoint>"; private static final String key = "<your-key>";
- public static void main(String[] args) {
-
- // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
-
- // sample document
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
-
- String modelId = "prebuilt-layout";
-
- SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
- client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
-
- AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
-
- // pages
- analyzeLayoutResult.getPages().forEach(documentPage -> {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
-
- // lines
- documentPage.getLines().forEach(documentLine ->
- System.out.printf("Line %s is within a bounding box %s.%n",
- documentLine.getContent(),
- documentLine.getBoundingBox().toString()));
-
- // words
- documentPage.getWords().forEach(documentWord ->
- System.out.printf("Word '%s' has a confidence score of %.2f.%n",
- documentWord.getContent(),
- documentWord.getConfidence()));
-
- // selection marks
- documentPage.getSelectionMarks().forEach(documentSelectionMark ->
- System.out.printf("Selection mark is %s and is within a bounding box %s with confidence %.2f.%n",
- documentSelectionMark.getState().toString(),
- documentSelectionMark.getBoundingBox().toString(),
- documentSelectionMark.getConfidence()));
- });
+ public static void main(String[] args) {
- // tables
- List < DocumentTable > tables = analyzeLayoutResult.getTables();
- for (int i = 0; i < tables.size(); i++) {
- DocumentTable documentTable = tables.get(i);
- System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
- documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell -> {
- System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(),
- documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
- });
- System.out.println();
- }
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+ String modelId = "prebuilt-layout";
+
+ SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+
+ AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
+
+ // pages
+ analyzeLayoutResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
+
+ // lines
+ documentPage.getLines().forEach(documentLine ->
+ System.out.printf("Line %s is within a bounding polygon %s.%n",
+ documentLine.getContent(),
+ documentLine.getBoundingPolygon().toString()));
+
+ // words
+ documentPage.getWords().forEach(documentWord ->
+ System.out.printf("Word '%s' has a confidence score of %.2f%n",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+
+ // selection marks
+ documentPage.getSelectionMarks().forEach(documentSelectionMark ->
+ System.out.printf("Selection mark is %s and is within a bounding polygon %s with confidence %.2f.%n",
+ documentSelectionMark.getState().toString(),
+ documentSelectionMark.getBoundingPolygon().toString(),
+ documentSelectionMark.getConfidence()));
+ });
+
+ // tables
+ List < DocumentTable > tables = analyzeLayoutResult.getTables();
+ for (int i = 0; i < tables.size(); i++) {
+ DocumentTable documentTable = tables.get(i);
+ System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
+ documentTable.getColumnCount());
+ documentTable.getCells().forEach(documentTableCell -> {
+ System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(),
+ documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
+ });
+ System.out.println();
} }+
+}
+ ``` **Build and run the application**
Analyze and extract common fields from specific document types using a prebuilt
// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
// sample document String invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"; String modelId = "prebuilt-invoice";
- SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeInvoicePoller = client.beginAnalyzeDocumentFromUrl(modelId, invoiceUrl);
+ SyncPoller<DocumentOperationResult, AnalyzeResult> analyzeInvoicePoller = client.beginAnalyzeDocumentFromUrl(modelId, invoiceUrl);
AnalyzeResult analyzeInvoiceResult = analyzeInvoicePoller.getFinalResult(); + for (int i = 0; i < analyzeInvoiceResult.getDocuments().size(); i++) {
- AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
- Map < String, DocumentField > invoiceFields = analyzedInvoice.getFields();
- System.out.printf("-- Analyzing invoice %d --%n", i);
- System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
- analyzedInvoice.getDocType(), analyzedInvoice.getConfidence());
-
- DocumentField vendorNameField = invoiceFields.get("VendorName");
- if (vendorNameField != null) {
- if (DocumentFieldType.STRING == vendorNameField.getType()) {
- String merchantName = vendorNameField.getValueString();
- Float confidence = vendorNameField.getConfidence();
- System.out.printf("Vendor Name: %s, confidence: %.2f%n",
- merchantName, vendorNameField.getConfidence());
- }
- }
-
- DocumentField vendorAddressField = invoiceFields.get("VendorAddress");
- if (vendorAddressField != null) {
- if (DocumentFieldType.STRING == vendorAddressField.getType()) {
- String merchantAddress = vendorAddressField.getValueString();
- System.out.printf("Vendor address: %s, confidence: %.2f%n",
- merchantAddress, vendorAddressField.getConfidence());
- }
- }
-
- DocumentField customerNameField = invoiceFields.get("CustomerName");
- if (customerNameField != null) {
- if (DocumentFieldType.STRING == customerNameField.getType()) {
- String merchantAddress = customerNameField.getValueString();
- System.out.printf("Customer Name: %s, confidence: %.2f%n",
- merchantAddress, customerNameField.getConfidence());
- }
- }
-
- DocumentField customerAddressRecipientField = invoiceFields.get("CustomerAddressRecipient");
- if (customerAddressRecipientField != null) {
- if (DocumentFieldType.STRING == customerAddressRecipientField.getType()) {
- String customerAddr = customerAddressRecipientField.getValueString();
- System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n",
- customerAddr, customerAddressRecipientField.getConfidence());
- }
- }
-
- DocumentField invoiceIdField = invoiceFields.get("InvoiceId");
- if (invoiceIdField != null) {
- if (DocumentFieldType.STRING == invoiceIdField.getType()) {
- String invoiceId = invoiceIdField.getValueString();
- System.out.printf("Invoice ID: %s, confidence: %.2f%n",
- invoiceId, invoiceIdField.getConfidence());
- }
- }
-
- DocumentField invoiceDateField = invoiceFields.get("InvoiceDate");
- if (customerNameField != null) {
- if (DocumentFieldType.DATE == invoiceDateField.getType()) {
- LocalDate invoiceDate = invoiceDateField.getValueDate();
- System.out.printf("Invoice Date: %s, confidence: %.2f%n",
- invoiceDate, invoiceDateField.getConfidence());
+ AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
+ Map<String, DocumentField> invoiceFields = analyzedInvoice.getFields();
+ System.out.printf("-- Analyzing invoice %d --%n", i);
+ System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
+ analyzedInvoice.getDocType(), analyzedInvoice.getConfidence());
+
+ DocumentField vendorNameField = invoiceFields.get("VendorName");
+ if (vendorNameField != null) {
+ if (DocumentFieldType.STRING == vendorNameField.getType()) {
+ String merchantName = vendorNameField.getValueString();
+ Float confidence = vendorNameField.getConfidence();
+ System.out.printf("Vendor Name: %s, confidence: %.2f%n",
+ merchantName, vendorNameField.getConfidence());
+ }
}
- }
-
- DocumentField invoiceTotalField = invoiceFields.get("InvoiceTotal");
- if (customerAddressRecipientField != null) {
- if (DocumentFieldType.FLOAT == invoiceTotalField.getType()) {
- Float invoiceTotal = invoiceTotalField.getValueFloat();
- System.out.printf("Invoice Total: %.2f, confidence: %.2f%n",
- invoiceTotal, invoiceTotalField.getConfidence());
++
+ DocumentField vendorAddressField = invoiceFields.get("VendorAddress");
+ if (vendorAddressField != null) {
+ if (DocumentFieldType.STRING == vendorAddressField.getType()) {
+ String merchantAddress = vendorAddressField.getValueString();
+ System.out.printf("Vendor address: %s, confidence: %.2f%n",
+ merchantAddress, vendorAddressField.getConfidence());
+ }
}
- }
-
- DocumentField invoiceItemsField = invoiceFields.get("Items");
- if (invoiceItemsField != null) {
- System.out.printf("Invoice Items: %n");
- if (DocumentFieldType.LIST == invoiceItemsField.getType()) {
- List < DocumentField > invoiceItems = invoiceItemsField.getValueList();
- invoiceItems.stream()
- .filter(invoiceItem -> DocumentFieldType.MAP == invoiceItem.getType())
- .map(formField -> formField.getValueMap())
- .forEach(formFieldMap -> formFieldMap.forEach((key, formField) -> {
- // See a full list of fields found on an invoice here:
- // https://aka.ms/formrecognizer/invoicefields
- if ("Description".equals(key)) {
- if (DocumentFieldType.STRING == formField.getType()) {
- String name = formField.getValueString();
- System.out.printf("Description: %s, confidence: %.2fs%n",
- name, formField.getConfidence());
+
+ DocumentField customerNameField = invoiceFields.get("CustomerName");
+ if (customerNameField != null) {
+ if (DocumentFieldType.STRING == customerNameField.getType()) {
+ String merchantAddress = customerNameField.getValueString();
+ System.out.printf("Customer Name: %s, confidence: %.2f%n",
+ merchantAddress, customerNameField.getConfidence());
}
- }
- if ("Quantity".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float quantity = formField.getValueFloat();
- System.out.printf("Quantity: %f, confidence: %.2f%n",
- quantity, formField.getConfidence());
+ }
+
+ DocumentField customerAddressRecipientField = invoiceFields.get("CustomerAddressRecipient");
+ if (customerAddressRecipientField != null) {
+ if (DocumentFieldType.STRING == customerAddressRecipientField.getType()) {
+ String customerAddr = customerAddressRecipientField.getValueString();
+ System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n",
+ customerAddr, customerAddressRecipientField.getConfidence());
}
- }
- if ("UnitPrice".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float unitPrice = formField.getValueFloat();
- System.out.printf("Unit Price: %f, confidence: %.2f%n",
- unitPrice, formField.getConfidence());
+ }
+
+ DocumentField invoiceIdField = invoiceFields.get("InvoiceId");
+ if (invoiceIdField != null) {
+ if (DocumentFieldType.STRING == invoiceIdField.getType()) {
+ String invoiceId = invoiceIdField.getValueString();
+ System.out.printf("Invoice ID: %s, confidence: %.2f%n",
+ invoiceId, invoiceIdField.getConfidence());
}
- }
- if ("ProductCode".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float productCode = formField.getValueFloat();
- System.out.printf("Product Code: %f, confidence: %.2f%n",
- productCode, formField.getConfidence());
+ }
+
+ DocumentField invoiceDateField = invoiceFields.get("InvoiceDate");
+ if (customerNameField != null) {
+ if (DocumentFieldType.DATE == invoiceDateField.getType()) {
+ LocalDate invoiceDate = invoiceDateField.getValueDate();
+ System.out.printf("Invoice Date: %s, confidence: %.2f%n",
+ invoiceDate, invoiceDateField.getConfidence());
}
- }
- }));
- }
- }
- }
- }
- }
+ }
+
+ DocumentField invoiceTotalField = invoiceFields.get("InvoiceTotal");
+ if (customerAddressRecipientField != null) {
+ if (DocumentFieldType.FLOAT == invoiceTotalField.getType()) {
+ Float invoiceTotal = invoiceTotalField.getValueFloat();
+ System.out.printf("Invoice Total: %.2f, confidence: %.2f%n",
+ invoiceTotal, invoiceTotalField.getConfidence());
+ }
+ }
+ DocumentField invoiceItemsField = invoiceFields.get("Items");
+ if (invoiceItemsField != null) {
+ System.out.printf("Invoice Items: %n");
+ if (DocumentFieldType.LIST == invoiceItemsField.getType()) {
+ List < DocumentField > invoiceItems = invoiceItemsField.getValueList();
+ invoiceItems.stream()
+ .filter(invoiceItem -> DocumentFieldType.MAP == invoiceItem.getType())
+ .map(formField -> formField.getValueMap())
+ .forEach(formFieldMap -> formFieldMap.forEach((key, formField) -> {
+ // See a full list of fields found on an invoice here:
+ // https://aka.ms/formrecognizer/invoicefields
+ if ("Description".equals(key)) {
+ if (DocumentFieldType.STRING == formField.getType()) {
+ String name = formField.getValueString();
+ System.out.printf("Description: %s, confidence: %.2fs%n",
+ name, formField.getConfidence());
+ }
+ }
+ if ("Quantity".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float quantity = formField.getValueFloat();
+ System.out.printf("Quantity: %f, confidence: %.2f%n",
+ quantity, formField.getConfidence());
+ }
+ }
+ if ("UnitPrice".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float unitPrice = formField.getValueFloat();
+ System.out.printf("Unit Price: %f, confidence: %.2f%n",
+ unitPrice, formField.getConfidence());
+ }
+ }
+ if ("ProductCode".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float productCode = formField.getValueFloat();
+ System.out.printf("Product Code: %f, confidence: %.2f%n",
+ productCode, formField.getConfidence());
+ }
+ }
+ }));
+ }
+ }
+ }
+ }
+ }
``` **Build and run the application**
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 06/13/2022 Last updated : 06/22/2022 recommendations: false
Extract text, tables, structure, key-value pairs, and named entities from docume
const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf" async function main() {
- // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
-
- const poller = await client.beginAnalyzeDocument("prebuilt-document", formUrl);
-
- const {
- keyValuePairs,
- entities
- } = await poller.pollUntilDone();
+ const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
+
+ const poller = await client.beginAnalyzeDocument("prebuilt-document", formUrl);
+
+ const {
+ keyValuePairs,
+ entities
+ } = await poller.pollUntilDone();
+
+ if (keyValuePairs.length <= 0) {
+ console.log("No key-value pairs were extracted from the document.");
+ } else {
+ console.log("Key-Value Pairs:");
+ for (const {
+ key,
+ value,
+ confidence
+ } of keyValuePairs) {
+ console.log("- Key :", `"${key.content}"`);
+ console.log(" Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
+ }
+ }
+
+}
+
+main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+});
- if (keyValuePairs.length <= 0) {
- console.log("No key-value pairs were extracted from the document.");
- } else {
- console.log("Key-Value Pairs:");
- for (const {
- key,
- value,
- confidence
- } of keyValuePairs) {
- console.log("- Key :", `"${key.content}"`);
- console.log(" Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
- }
- }
-
- if (entities.length <= 0) {
- console.log("No entities were extracted from the document.");
- } else {
- console.log("Entities:");
- for (const entity of entities) {
- console.log(
- `- "${entity.content}" ${entity.category} - ${entity.subCategory ?? "<none>"} (${
- entity.confidence
- })`
- );
- }
- }
- }
-
- main().catch((error) => {
- console.error("An error occurred:", error);
- process.exit(1);
- });
``` **Run your application**
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 03/31/2022 Last updated : 06/23/2022 recommendations: false
key = "<your-key>"
def format_bounding_region(bounding_regions): if not bounding_regions: return "N/A"
- return ", ".join("Page #{}: {}".format(region.page_number, format_bounding_box(region.bounding_box)) for region in bounding_regions)
+ return ", ".join("Page #{}: {}".format(region.page_number, format_polygon(region.polygon)) for region in bounding_regions)
-def format_bounding_box(bounding_box):
- if not bounding_box:
+def format_polygon(polygon):
+ if not polygon:
return "N/A"
- return ", ".join(["[{}, {}]".format(p.x, p.y) for p in bounding_box])
+ return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])
def analyze_general_documents():
def analyze_general_documents():
) )
- print("-Entities found in document-")
- for entity in result.entities:
- print("Entity of category '{}' with sub-category '{}'".format(entity.category, entity.sub_category))
- print("...has content '{}'".format(entity.content))
- print("...within '{}' bounding regions".format(format_bounding_region(entity.bounding_regions)))
- print("...with confidence {}\n".format(entity.confidence))
- for page in result.pages: print("-Analyzing document from page #{}-".format(page.page_number)) print(
def analyze_general_documents():
"...Line # {} has text content '{}' within bounding box '{}'".format( line_idx, line.content,
- format_bounding_box(line.bounding_box),
+ format_polygon(line.polygon),
) )
def analyze_general_documents():
print( "...Selection mark is '{}' within bounding box '{}' and has a confidence of {}".format( selection_mark.state,
- format_bounding_box(selection_mark.bounding_box),
+ format_polygon(selection_mark.polygon),
selection_mark.confidence, ) )
def analyze_general_documents():
"Table # {} location on page: {} is {}".format( table_idx, region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) ) for cell in table.cells:
def analyze_general_documents():
print( "...content on page {} is within bounding box '{}'\n".format( region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) ) print("-")
from azure.core.credentials import AzureKeyCredential
endpoint = "<your-endpoint>" key = "<your-key>"
-def format_bounding_box(bounding_box):
- if not bounding_box:
+def format_polygon(polygon):
+ if not polygon:
return "N/A"
- return ", ".join(["[{}, {}]".format(p.x, p.y) for p in bounding_box])
+ return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])
def analyze_layout(): # sample form document formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
- # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint=endpoint, credential=AzureKeyCredential(key)
+ )
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-layout", formUrl)
def analyze_layout():
line_idx, len(words), line.content,
- format_bounding_box(line.bounding_box),
+ format_polygon(line.polygon),
) )
def analyze_layout():
print( "...Selection mark is '{}' within bounding box '{}' and has a confidence of {}".format( selection_mark.state,
- format_bounding_box(selection_mark.bounding_box),
+ format_polygon(selection_mark.polygon),
selection_mark.confidence, ) )
def analyze_layout():
"Table # {} location on page: {} is {}".format( table_idx, region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) ) for cell in table.cells:
def analyze_layout():
print( "...content on page {} is within bounding box '{}'".format( region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) )
key = "<your-key>"
def format_bounding_region(bounding_regions): if not bounding_regions: return "N/A"
- return ", ".join("Page #{}: {}".format(region.page_number, format_bounding_box(region.bounding_box)) for region in bounding_regions)
+ return ", ".join("Page #{}: {}".format(region.page_number, format_polygon(region.polygon)) for region in bounding_regions)
-def format_bounding_box(bounding_box):
- if not bounding_box:
+def format_polygon(polygon):
+ if not polygon:
return "N/A"
- return ", ".join(["[{}, {}]".format(p.x, p.y) for p in bounding_box])
+ return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])
def analyze_invoice(): invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"
- # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint=endpoint, credential=AzureKeyCredential(key)
+ )
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-invoice", invoiceUrl)
def analyze_invoice():
if __name__ == "__main__": analyze_invoice()+
+ print("-")
``` **Run the application**
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
Before setting up your Automation account resource, consider your network isolat
### Connect to a private endpoint
-Create a private endpoint to connect our network. You can create it in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints). Once your changes to publicNetworkAccess and Private Link are applied, it can take up to 35 minutes for them to take effect.
+Follow the steps below to create a private endpoint for your Automation account.
-In this section, you'll create a private endpoint for your Automation account.
+1. Go to the [Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) in the Azure portal to create a private endpoint to connect your network.
+Once your changes to public network access and Private Link are applied, it can take up to 35 minutes for them to take effect.
-1. On the upper-left side of the screen, select **Create a resource > Networking > Private Link Center**.
+1. On **Private Link Center**, select **Create private endpoint**.
-2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
+ :::image type="content" source="./media/private-link-security/create-private-endpoint.png" alt-text="Screenshot of how to create a private endpoint.":::
-3. In **Create a virtual machine - Basics**, enter or select the following information:
+1. On **Basics**, enter the following details:
+ - **Subscription**
+ - **Resource group**
+ - **Name**
+ - **Network Interface Name**
+ - **Region** and select **Next: Resource**.
- | Setting | Value |
- | - | -- |
- | **PROJECT DETAILS** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section. |
- | **INSTANCE DETAILS** | |
- | Name | Enter your *PrivateEndpoint*. |
- | Region | Select **YourRegion**. |
- |||
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-basics.png" alt-text="Screenshot of how to create a private endpoint in Basics tab.":::
-4. Select **Next: Resource**.
+1. On **Resource**, enter the following details:
+ - **Connection method**, select the default option - *Connect to an Azure resource in my directory*.
+ - **Subscription**
+ - **Resource type**
+ - **Resource**.
+ - **Target sub-resource**, select either *Webhook* or *DSCAndHybridWorker* depending on your scenario, and then select **Next : Virtual Network**.
+
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-resource-inline.png" alt-text="Screenshot of how to create a private endpoint in Resource tab." lightbox="./media/private-link-security/create-private-endpoint-resource-expanded.png":::
-5. In **Create a private endpoint - Resource**, enter or select the following information:
+1. On **Virtual Network**, enter the following details:
+ - **Virtual network**
+ - **Subnet**
+ - Enable the checkbox for **Enable network policies for all private endpoints in this subnet**.
+ - Select **Dynamically allocate IP address** and select **Next : DNS**.
- | Setting | Value |
- | - | -- |
- |Connection method | Select connect to an Azure resource in my directory.|
- | Subscription| Select your subscription. |
- | Resource type | Select **Microsoft.Automation/automationAccounts**. |
- | Resource |Select *myAutomationAccount*|
- |Target subresource |Select *Webhook* or *DSCAndHybridWorker* depending on your scenario.|
- |||
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-virtual-network-inline.png" alt-text="Screenshot of how to create a private endpoint in Virtual network tab." lightbox="./media/private-link-security/create-private-endpoint-virtual-network-expanded.png":::
-6. Select **Next: Configuration**.
+1. On **DNS**, the fields are prepopulated from the information you entered on the **Basics**, **Resource**, and **Virtual Network** tabs, and a private DNS zone is created. Enter the following details:
+ - **Integrate with private DNS Zone**
+ - **Subscription**
+ - **Resource group**, and then select **Next: Tags**.
-7. In **Create a private endpoint - Configuration**, enter or select the following information:
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-dns-inline.png" alt-text="Screenshot of how to create a private endpoint in DNS tab." lightbox="./media/private-link-security/create-private-endpoint-dns-expanded.png":::
- | Setting | Value |
- | - | -- |
- |**NETWORKING**| |
- | Virtual network| Select *MyVirtualNetwork*. |
- | Subnet | Select *mySubnet*. |
- |**PRIVATE DNS INTEGRATION**||
- |Integrate with private DNS zone |Select **Yes**. |
- |Private DNS Zone |Select *(New)privatelink.azure-automation.net* |
- |||
+1. On **Tags**, you can optionally categorize resources. Enter a **Name** and **Value**, and then select **Review + create**. You're taken to the **Review + create** page, where Azure validates your configuration.
-8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-9. When you see the **Validation passed** message, select **Create**.
+In the **Private Link Center**, select **Private endpoints** to view your private link resource.
-In the **Private Link Center**, select **Private endpoints** to view your private link resource.
-
-![Automation resource private link](./media/private-link-security/private-link-automation-resource.png)
Select the resource to see all the details. This creates a new private endpoint for your Automation account and assigns it a private IP from your virtual network. The **Connection status** shows as **approved**.
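If you prefer to script this step, the private endpoint can also be created with the Azure CLI. The following is only a sketch: the account, resource group, virtual network, and endpoint names are placeholders, and the exact parameter names can vary slightly between CLI versions.

```azurecli
# Look up the resource ID of the Automation account (placeholder names)
automationId=$(az resource show \
  --resource-group myResourceGroup \
  --name myAutomationAccount \
  --resource-type "Microsoft.Automation/automationAccounts" \
  --query id --output tsv)

# Create the private endpoint. Use Webhook or DSCAndHybridWorker for --group-id, depending on your scenario.
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name MyVirtualNetwork \
  --subnet mySubnet \
  --private-connection-resource-id "$automationId" \
  --group-id DSCAndHybridWorker \
  --connection-name myConnection
```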
automation Powershell Runbook Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/powershell-runbook-managed-identity.md
Remove-AzRoleAssignment `
## Next steps
-In this tutorial, you created a [PowerShell runbook](../automation-runbook-types.md#powershell-runbooks) in Azure Automation that used a[managed identity](../automation-security-overview.md#managed-identities), rather than the Run As account to interact with resources. For a look at PowerShell Workflow runbooks, see:
+In this tutorial, you created a [PowerShell runbook](../automation-runbook-types.md#powershell-runbooks) in Azure Automation that used a [managed identity](../automation-security-overview.md#managed-identities), rather than the Run As account to interact with resources. For a look at PowerShell Workflow runbooks, see:
> [!div class="nextstepaction"] > [Tutorial: Create a PowerShell Workflow runbook](automation-tutorial-runbook-textual.md)
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Manage Oracle Linux 6 and 7 machines with Automation State Configuration. See [S
**Type:** New feature
-Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions in Azure global cloud. For more information, see the [announcement]((https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
+Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions in Azure global cloud. For more information, see the [announcement](https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
## November 2020
avere-vfxt Avere Vfxt Additional Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-additional-resources.md
This article has links to additional documentation about the Avere Control Panel
## Avere cluster documentation
-Additional Avere cluster documentation can be found on the website at <https://azure.github.io/Avere/>. These documents can help you understand the cluster's capabilities and how to configure its settings.
+Additional Avere cluster documentation can be found on the [Avere website](https://azure.github.io/Avere/). These documents can help you understand the cluster's capabilities and how to configure its settings.
* The [FXT Cluster Creation Guide](https://azure.github.io/Avere/#fxt_cluster) is designed for clusters made up of physical hardware nodes, but some information in the document is relevant for vFXT clusters as well. In particular, new vFXT cluster administrators can benefit from reading these sections: * [Customizing Support and Monitoring Settings](https://azure.github.io/Avere/legacy/create_cluster/4_8/html/config_support.html#config-support) explains how to customize support upload settings and enable remote monitoring.
avere-vfxt Avere Vfxt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-overview.md
Avere vFXT for Azure is best suited for these situations:
* Compute farms of 1000 to 40,000 CPU cores * Integration with on-premises hardware NAS, Azure Blob storage, or both
-For more information, visit <https://azure.microsoft.com/services/storage/avere-vfxt/>
+For more information, see [Avere vFXT for Azure](https://azure.microsoft.com/services/storage/avere-vfxt/).
## Who uses Avere vFXT for Azure?
avere-vfxt Avere Vfxt Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-prereqs.md
This step only needs to be done once per subscription.
To accept the software terms in advance:
-1. Open a cloud shell in the Azure portal or by browsing to <https://shell.azure.com>. Sign in with your subscription ID.
+1. Use the [Azure Cloud Shell](https://shell.azure.com) to sign in using your subscription ID.
 ```azurecli az login
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
The Spring Boot Feature Management libraries extend the framework with comprehen
## Create a Spring Boot app
-Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boot project.
+To create a new Spring Boot project:
-1. Browse to <https://start.spring.io/>.
+1. Browse to the [Spring Initializr](https://start.spring.io).
1. Specify the following options:
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
In this quickstart, you incorporate Azure App Configuration into a Java Spring a
## Create a Spring Boot app
-Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boot project.
+To create a new Spring Boot project:
-1. Browse to <https://start.spring.io/>.
+1. Browse to the [Spring Initializr](https://start.spring.io).
1. Specify the following options:
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md
When creating an instance using either `az sql mi-arc create` or `az postgres ar
|Parameter name, short name|Used for| |||
-|`--storage-class-data`, `-d`|Used to specify the storage class for all data files including transaction log files|
-|`--storage-class-logs`, `-g`|Used to specify the storage class for all log files|
-|`--storage-class-data-logs`|Used to specify the storage class for the database transaction log files.|
-|`--storage-class-backups`|Used to specify the storage class for all backup files. Use a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). |
+|`--storage-class-data`, `-d`|Storage class for all data files (.mdf, .ndf). If not specified, defaults to storage class for data controller.|
+|`--storage-class-logs`, `-g`|Storage class for all log files. If not specified, defaults to storage class for data controller.|
+|`--storage-class-data-logs`|Storage class for the database transaction log files. If not specified, defaults to storage class for data controller.|
+|`--storage-class-backups`|Storage class for all backup files. If not specified, defaults to storage class for data (`--storage-class-data`).<br/><br/> Use a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). |
> [!WARNING]
-> If you don't specify a storage class for backups, the deployment uses the default storage class in Kubernetes. If this storage class isn't RWX capable, the deployment may not succeed.
+> If you don't specify a storage class for backups, the deployment uses the storage class specified for data. If this storage class isn't RWX capable, the point-in-time restore may not work as desired.
The table below lists the paths inside the Azure SQL Managed Instance container that is mapped to the persistent volume for data and logs:
-|Parameter name, short name|Path inside mssql-miaa container|Description|
+|Parameter name, short name|Path inside `mssql-miaa` container|Description|
|||| |`--storage-class-data`, `-d`|/var/opt|Contains directories for the mssql installation and other system processes. The mssql directory contains default data (including transaction logs), error log & backup directories| |`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
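As an illustration of the storage class parameters above, the following sketch passes them to `az sql mi-arc create`. The instance name, namespace, and storage class names are placeholders, and the command requires the Azure Arc data (`arcdata`) CLI extension.

```azurecli
# Requires the arcdata CLI extension; names and storage classes are placeholders
az sql mi-arc create \
  --name sql-instance-1 \
  --k8s-namespace arc \
  --use-k8s \
  --storage-class-data managed-premium \
  --storage-class-logs managed-premium \
  --storage-class-data-logs managed-premium \
  --storage-class-backups azurefile-csi   # backups need an RWX-capable storage class
```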
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics.md
Once your metrics are uploaded, you can view them from the Azure portal.
> Please note that it can take a couple of minutes for the uploaded data to be processed before you can view the metrics in the portal.
-To view your metrics in the portal, use this link to open the portal: <https://portal.azure.com>
-Then, search for your database instance by name in the search bar:
+To view your metrics, navigate to the [Azure portal](https://portal.azure.com). Then, search for your database instance by name in the search bar:
You can view CPU utilization on the Overview page, or if you want more detailed metrics, select Metrics in the left navigation panel.
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
You should see output similar to the example below.
Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your Key Vault must be globally unique.
-```azurecli
-az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
-Next, set the following environment variables:
+Set the following environment variables:
```azurecli-interactive export AKV_RESOURCE_GROUP=<resource-group-name> export AZUREKEYVAULT_NAME=<AKV-name> export AZUREKEYVAULT_LOCATION=<AKV-location> ```
+Next, run the following command to create the Key Vault:
+
+```azurecli
+az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
+```
Azure Key Vault can store keys, secrets, and certificates. For this example, you can set a plain text secret called `DemoSecret` by using the following command:
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
- name: akvprovider-demo
+ name: akvprovider-demo
spec: provider: azure parameters:
For more information about resolving common issues, see the open source troubles
## Next steps - Want to try things out? Get started quickly with an [Azure Arc Jumpstart scenario](https://aka.ms/arc-jumpstart-akv-secrets-provider) using Cluster API.-- Learn more about [Azure Key Vault](../../key-vault/general/overview.md).
+- Learn more about [Azure Key Vault](../../key-vault/general/overview.md).
azure-fluid-relay Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/authentication-authorization.md
The secret key is how the Azure Fluid Relay service knows that requests are comi
Azure Fluid Relay uses [JSON Web Tokens (JWTs)](https://jwt.io/) to encode and verify data signed with your secret key. JSON Web Tokens are a signed bit of JSON that can include additional information about rights and permissions. > [!NOTE]
-> The specifics of JWTs are beyond the scope of this article. For more details about the JWT standard see
-> <https://jwt.io/introduction>.
+> The specifics of JWTs are beyond the scope of this article. For more information about the JWT standard, see [Introduction to JSON Web Tokens](https://jwt.io/introduction).
Though the details of authentication differ between Fluid services, several values must always be present.
azure-fluid-relay Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/reference/service-limits.md
+
+ Title: Azure Fluid Relay limits
+description: Limits and throttles applied in Azure Fluid Relay.
+++ Last updated : 08/19/2021++++
+# Azure Fluid Relay Limits
+
+This article outlines known limitations of Azure Fluid Relay.
+
+## Distributed Data Structures
+
+Azure Fluid Relay doesn't support [experimental distributed data structures (DDSes)](https://fluidframework.com/docs/data-structures/experimental/). These include, but aren't limited to, DDSes in packages under the `@fluid-experimental` package namespace.
+
+## Fluid sessions
+
+The maximum number of simultaneous users in one session on Azure Fluid Relay is 100. When this limit is reached, the next user who attempts to join the session is rejected. If an existing user leaves the session, a new user can join, because the number of simultaneous users drops below the limit.
+
+## Fluid Summaries
+
+Incremental summaries uploaded to Azure Fluid Relay can't exceed 28 MB in size. For more information, see [Summarizer](https://fluidframework.com/docs/concepts/summarizer).
+
+## Signals
+
+Azure Fluid Relay doesn't currently support signals. To learn more, see [Signals](https://fluidframework.com/docs/concepts/signals/).
+
+## Need help?
+
+If you need help with any of the above limits or other Azure Fluid Relay topics, see the [support](../resources/support.md) options available to you.
azure-fluid-relay Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/support.md
+
+ Title: Azure Fluid Relay support
+description: Help and support options for Azure Fluid Relay.
+++ Last updated : 08/19/2021++++
+# Help and Support options for Azure Fluid Relay
+
+If you have an issue or question involving Azure Fluid Relay, the following options are available.
+
+## Check out Frequently Asked Questions
+
+You can check whether your question is already answered on the [Frequently Asked Questions page](faq.md).
+
+## Create an Azure Support Request
+
+With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+
+## Post a question to Microsoft Q&A
+
+For quick and reliable answers to product or technical questions you might have about Azure Fluid Relay from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our community, engage with us on [Microsoft Q&A](/answers/products/azure).
+
+If you can't find an answer to your problem by searching, you can submit a new question to Microsoft Q&A. When creating a question, make sure to use the [Azure Fluid Relay tag](/answers/topics/azure-fluid-relay.html).
+
+## Post a question on Stack Overflow
+
+You can also try asking your question on Stack Overflow, which has a large developer community and ecosystem. Azure Fluid Relay has a [dedicated tag](https://stackoverflow.com/questions/tagged/azure-fluid-relay) there too.
+
+## Fluid Framework
+
+For questions about the Fluid Framework, you can file issues on [GitHub](https://github.com/microsoft/fluidframework). The [Fluid Framework site](https://fluidframework.com) is another good source of information about the framework.
+
azure-functions Create First Function Arc Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-custom-container.md
In Azure Functions, a function project is the context for one or more individual
cd LocalFunctionProj ```
- This folder contains the Dockerfile other files for the project, including configurations files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
+    This folder contains the `Dockerfile` and other files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
1. Open the generated `Dockerfile` and locate the `3.0` tag for the base image. If there's a `3.0` tag, replace it with a `3.0.15885` tag. For example, in a JavaScript application, the Docker file should be modified to have `FROM mcr.microsoft.com/azure-functions/node:3.0.15885`. This version of the base image supports deployment to an Azure Arc-enabled Kubernetes cluster.
azure-functions Create Function App Linux App Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-function-app-linux-app-service-plan.md
Azure Functions lets you host your functions on Linux in a default Azure App Ser
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com> with your Azure account.
+Sign in to the [Azure portal](https://portal.azure.com) using your Azure account.
## Create a function app
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Azure Functions deployment slots allow your function app to run different instan
The following reflect how functions are affected by swapping slots: -- Traffic redirection is seamless; no requests are dropped because of a swap.-- If a function is running during a swap, execution continues and the next triggers are routed to the swapped app instance.
+- Traffic redirection is seamless; no requests are dropped because of a swap. This seamless behavior is a result of the next function triggers being routed to the swapped slot.
+- Currently executing functions are terminated during the swap. Please review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
## Why use slots?
Some configuration settings are slot-specific. The following lists detail which
**Slot-specific settings**:
-* Publishing endpoints
-* Custom domain names
-* Non-public certificates and TLS/SSL settings
-* Scale settings
-* IP restrictions
-* Always On
-* Diagnostic settings
-* Cross-origin resource sharing (CORS)
+- Publishing endpoints
+- Custom domain names
+- Non-public certificates and TLS/SSL settings
+- Scale settings
+- IP restrictions
+- Always On
+- Diagnostic settings
+- Cross-origin resource sharing (CORS)
**Non slot-specific settings**:
-* General settings, such as framework version, 32/64-bit, web sockets
-* App settings (can be configured to stick to a slot)
-* Connection strings (can be configured to stick to a slot)
-* Handler mappings
-* Public certificates
-* Hybrid connections *
-* Virtual network integration *
-* Service endpoints *
-* Azure Content Delivery Network *
+- General settings, such as framework version, 32/64-bit, web sockets
+- App settings (can be configured to stick to a slot)
+- Connection strings (can be configured to stick to a slot)
+- Handler mappings
+- Public certificates
+- Hybrid connections *
+- Virtual network integration *
+- Service endpoints *
+- Azure Content Delivery Network *
-Features marked with an asterisk (*) are planned to be unswapped.
+Features marked with an asterisk (*) are planned to be unswapped.
> [!NOTE] > Certain app settings that apply to unswapped settings are also not swapped. For example, since diagnostic settings are not swapped, related app settings like `WEBSITE_HTTPLOGGING_RETENTION_DAYS` and `DIAGNOSTICS_AZUREBLOBRETENTIONDAYS` are also not swapped, even if they don't show up as slot settings.
You can swap slots via the [CLI](/cli/azure/functionapp/deployment/slot#az-funct
:::image type="content" source="./media/functions-deployment-slots/functions-swap-deployment-slot.png" alt-text="Screenshot that shows the 'Deployment slot' page with the 'Add Slot' action selected." border="true"::: 1. Verify the configuration settings for your swap and select **Swap**
-
+ :::image type="content" source="./media/functions-deployment-slots/azure-functions-deployment-slots-swap-config.png" alt-text="Swap the deployment slot." border="true"::: The operation may take a moment while the swap operation is executing.
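The same swap can be performed with the Azure CLI mentioned earlier. A minimal sketch, assuming a slot named *staging* and placeholder app and resource group names:

```azurecli
# Swap the staging slot into production (placeholder names)
az functionapp deployment slot swap \
  --name MyFunctionApp \
  --resource-group MyResourceGroup \
  --slot staging \
  --target-slot production
```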
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
Last updated 05/18/2022
# Deployment technologies in Azure Functions
-You can use a few different technologies to deploy your Azure Functions project code to Azure. This article provides an overview of the deployment methods available to you and recommendations for the best method to use in various scenarios. It also provides an exhaustive list of and key details about the underlying deployment technologies.
+You can use a few different technologies to deploy your Azure Functions project code to Azure. This article provides an overview of the deployment methods available to you and recommendations for the best method to use in various scenarios. It also provides an exhaustive list of and key details about the underlying deployment technologies.
## Deployment methods
The following table describes the available deployment methods for your Function
| Deployment&nbsp;type | Methods | Best for... | | -- | -- | -- |
-| Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad hoc deployments. Deployments are managed locally by the tooling. |
+| Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad hoc deployments. Deployments are managed locally by the tooling. |
| App Service-managed| &bull;&nbsp;[Deployment&nbsp;Center&nbsp;(CI/CD)](functions-continuous-deployment.md)<br/>&bull;&nbsp;[Container&nbsp;deployments](functions-create-function-linux-custom-image.md#enable-continuous-deployment-to-azure) | Continuous deployment (CI/CD) from source control or from a container registry. Deployments are managed by the App Service platform (Kudu).| | External pipelines|&bull;&nbsp;[Azure Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub Actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions be run as part of an automated deployment. Deployments are managed by the pipeline. |
Some key concepts are critical to understanding how deployments work in Azure Fu
When you change any of your triggers, the Functions infrastructure must be aware of the changes. Synchronization happens automatically for many deployment technologies. However, in some cases, you must manually sync your triggers. When you deploy your updates by referencing an external package URL, local Git, cloud sync, or FTP, you must manually sync your triggers. You can sync triggers in one of three ways:
-* Restart your function app in the Azure portal.
-* Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](functions-bindings-http-webhook-trigger.md#authorization-keys).
-* Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app.
++ Restart your function app in the Azure portal.++ Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](functions-bindings-http-webhook-trigger.md#authorization-keys).++ Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app. When you deploy using an external package URL and the contents of the package change but the URL itself doesn't change, you need to manually restart your function app to fully sync your updates.
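For example, either of the following Azure CLI commands performs a manual sync. The subscription, resource group, and function app names are placeholders.

```azurecli
# Restart the function app, which also syncs triggers
az functionapp restart --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>

# Or call the syncfunctiontriggers endpoint directly
az rest --method post \
  --uri "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01"
```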
When an app is deployed to Windows, language-specific commands, like `dotnet res
To enable remote build on Linux, the following [application settings](functions-how-to-use-azure-function-app-settings.md#settings) must be set:
-* `ENABLE_ORYX_BUILD=true`
-* `SCM_DO_BUILD_DURING_DEPLOYMENT=true`
++ `ENABLE_ORYX_BUILD=true`++ `SCM_DO_BUILD_DURING_DEPLOYMENT=true` By default, both [Azure Functions Core Tools](functions-run-local.md) and the [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) perform remote builds when deploying to Linux. Because of this, both tools automatically create these settings for you in Azure.
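If you need to set these values yourself, a minimal sketch with placeholder app and resource group names:

```azurecli
# Enable remote build on Linux (placeholder app and resource group names)
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings ENABLE_ORYX_BUILD=true SCM_DO_BUILD_DURING_DEPLOYMENT=true
```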
You can use an external package URL to reference a remote package (.zip) file th
> >If you use Azure Blob storage, use a private container with a [shared access signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to give Functions access to the package. Any time the application restarts, it fetches a copy of the content. Your reference must be valid for the lifetime of the application.
->__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed. When you change the contents of the package file and not the URL itself, you must also restart your function app manually.
+>__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed. When you change the contents of the package file and not the URL itself, you must also restart your function app manually.
### Zip deploy
You can deploy a Linux container image that contains your function app.
>__How to use it:__ Create a Linux function app in the Premium or Dedicated plan and specify which container image to run from. You can do this in two ways: >
->* Create a Linux function app on an Azure App Service plan in the Azure portal. For **Publish**, select **Docker Image**, and then configure the container. Enter the location where the image is hosted.
->* Create a Linux function app on an App Service plan by using the Azure CLI. To learn how, see [Create a function on Linux by using a custom image](functions-create-function-linux-custom-image.md#create-supporting-azure-resources-for-your-function).
+>+ Create a Linux function app on an Azure App Service plan in the Azure portal. For **Publish**, select **Docker Image**, and then configure the container. Enter the location where the image is hosted.
+>+ Create a Linux function app on an App Service plan by using the Azure CLI. To learn how, see [Create a function on Linux by using a custom image](functions-create-function-linux-custom-image.md#create-supporting-azure-resources-for-your-function).
> >To deploy to a Kubernetes cluster as a custom container, in [Azure Functions Core Tools](functions-run-local.md), use the [`func kubernetes deploy`](functions-core-tools-reference.md#func-kubernetes-deploy) command.
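As a rough sketch of the CLI path, the following creates a Linux function app from a container image on an existing plan. The plan, storage account, and image names are placeholders, and the image parameter name can differ between CLI versions.

```azurecli
# Placeholder names; the image parameter may be named differently in newer CLI versions
az functionapp create \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --plan <PREMIUM_OR_DEDICATED_PLAN_NAME> \
  --storage-account <STORAGE_ACCOUNT_NAME> \
  --deployment-container-image-name <REGISTRY>/<IMAGE>:<TAG>
```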
In the portal-based editor, you can directly edit the files that are in your fun
>__When to use it:__ The portal is a good way to get started with Azure Functions. For more intense development work, we recommend that you use one of the following client tools: >
->* [Visual Studio Code](./create-first-function-vs-code-csharp.md)
->* [Azure Functions Core Tools (command line)](functions-run-local.md)
->* [Visual Studio](functions-create-your-first-function-visual-studio.md)
+>+ [Visual Studio Code](./create-first-function-vs-code-csharp.md)
+>+ [Azure Functions Core Tools (command line)](functions-run-local.md)
+>+ [Visual Studio](functions-create-your-first-function-visual-studio.md)
The following table shows the operating systems and languages that support portal editing:
The following table shows the operating systems and languages that support porta
## Deployment behaviors
-When you do a deployment, all existing executions are allowed to complete or time out, after which the new code is loaded to begin processing requests.
+When you deploy updates to your function app code, currently executing functions are terminated. After deployment completes, the new code is loaded to begin processing requests. Please review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
If you need more control over this transition, you should use deployment slots.
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
Logging in PowerShell functions works like regular PowerShell logging. You can u
| - | -- | | Error | **`Write-Error`** | | Warning | **`Write-Warning`** |
-| Information | **`Write-Information`** <br/> **`Write-Host`** <br /> **`Write-Output`** <br/> Writes to _Information_ level logging. |
+| Information | **`Write-Information`** <br/> **`Write-Host`** <br /> **`Write-Output`** <br/> Writes to the `Information` log level. |
| Debug | **`Write-Debug`** | | Trace | **`Write-Progress`** <br /> **`Write-Verbose`** |
Functions lets you leverage [PowerShell gallery](https://www.powershellgallery.c
} ```
-When you create a new PowerShell functions project, dependency management is enabled by default, with the Azure [`Az` module](/powershell/azure/new-azureps-module-az) included. The maximum number of modules currently supported is 10. The supported syntax is _`MajorNumber`_`.*` or exact module version as shown in the following requirements.psd1 example:
+When you create a new PowerShell functions project, dependency management is enabled by default, with the Azure [`Az` module](/powershell/azure/new-azureps-module-az) included. The maximum number of modules currently supported is 10. The supported syntax is *`MajorNumber.*`* or exact module version, as shown in the following requirements.psd1 example:
```powershell @{
In this way, the older version of the Az.Account module is loaded first when the
The following considerations apply when using dependency management:
-+ Managed dependencies requires access to <https://www.powershellgallery.com> to download modules. When running locally, make sure that the runtime can access this URL by adding any required firewall rules.
++ Managed dependencies require access to `https://www.powershellgallery.com` to download modules. When running locally, make sure that the runtime can access this URL by adding any required firewall rules. + Managed dependencies currently don't support modules that require the user to accept a license, either by accepting the license interactively, or by providing the `-AcceptLicense` switch when invoking `Install-Module`.
Depending on your use case, Durable Functions may significantly improve scalabil
### Considerations for using concurrency
-PowerShell is a _single threaded_ scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The amount of runspaces created, and therefore the number of concurrent threads per worker, is limited by the ```PSWorkerInProcConcurrencyUpperBound``` application setting. By default, the number of runspaces is set to 1,000 in version 4.x of the Functions runtime. In versions 3.x and below, the maximum number of runspaces is set to 1. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
+PowerShell is a *single-threaded* scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The number of runspaces created, and therefore the number of concurrent threads per worker, is limited by the `PSWorkerInProcConcurrencyUpperBound` application setting. By default, the number of runspaces is set to 1,000 in version 4.x of the Functions runtime. In versions 3.x and below, the maximum number of runspaces is set to 1. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
-Azure PowerShell uses some _process-level_ contexts and state to help save you from excess typing. However, if you turn on concurrency in your function app and invoke actions that change state, you could end up with race conditions. These race conditions are difficult to debug because one invocation relies on a certain state and the other invocation changed the state.
+Azure PowerShell uses some *process-level* contexts and state to help save you from excess typing. However, if you turn on concurrency in your function app and invoke actions that change state, you could end up with race conditions. These race conditions are difficult to debug because one invocation relies on a certain state and the other invocation changed the state.
There's immense value in concurrency with Azure PowerShell, since some operations can take a considerable amount of time. However, you must proceed with caution. If you suspect that you're experiencing a race condition, set the PSWorkerInProcConcurrencyUpperBound app setting to `1` and instead use [language worker process level isolation](functions-app-settings.md#functions_worker_process_count) for concurrency.
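For example, both settings mentioned above can be applied with the Azure CLI. The app and resource group names are placeholders, and the worker process count shown is only an illustration.

```azurecli
# Placeholder names; adjust the worker process count for your workload
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings PSWorkerInProcConcurrencyUpperBound=1 FUNCTIONS_WORKER_PROCESS_COUNT=2
```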
azure-functions Performance Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md
Assume your function could encounter an exception at any time. Design your funct
Depending on how complex your system is, you may have: involved downstream services behaving badly, networking outages, or quota limits reached, etc. All of these can affect your function at any time. You need to design your functions to be prepared for it.
-How does your code react if a failure occurs after inserting 5,000 of those items into a queue for processing? Track items in a set that youΓÇÖve completed. Otherwise, you might insert them again next time. This double-insertion can have a serious impact on your work flow, so [make your functions idempotent](functions-idempotent.md).
+How does your code react if a failure occurs after inserting 5,000 of those items into a queue for processing? Track the items that you've completed in a set. Otherwise, you might insert them again next time. This double insertion can have a serious impact on your workflow, so [make your functions idempotent](functions-idempotent.md).
If a queue item was already processed, allow your function to be a no-op. Take advantage of defensive measures already provided for components you use in the Azure Functions platform. For example, see **Handling poison queue messages** in the documentation for [Azure Storage Queue triggers and bindings](functions-bindings-storage-queue-trigger.md#poison-messages).
+For HTTP-based functions, consider [API versioning strategies](/azure/architecture/reference-architectures/serverless/web-app#api-versioning) with Azure API Management. For example, if you have to update your HTTP-based function app, deploy the update to a separate function app and use API Management revisions or versions to direct clients to the new version or revision. Once all clients are using the new version or revision and no executions are left on the previous function app, you can deprovision it.
+ ## Function organization best practices As part of your solution, you may develop and publish multiple functions. These functions are often combined into a single function app, but they can also run in separate function apps. In Premium and dedicated (App Service) hosting plans, multiple function apps can also share the same resources by running in the same plan. How you group your functions and function apps can impact the performance, scaling, configuration, deployment, and security of your overall solution. There aren't rules that apply to every scenario, so consider the information in this section when planning and developing your functions.
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` options
## General considerations
-+ The package file must be .zip formatted. Tar and gzip formats aren't currently supported.
++ The package file must be .zip formatted. Tar and gzip formats aren't currently supported. + [Zip deployment](#integration-with-zip-deployment) is recommended. + When deploying your function app to Windows, you should set `WEBSITE_RUN_FROM_PACKAGE` to `1` and publish with zip deployment. + When you run from a package, the `wwwroot` folder becomes read-only and you'll receive an error when writing files to this directory. Files are also read-only in the Azure portal. + The maximum size for a deployment package file is currently 1 GB.
-+ You can't use local cache when running from a deployment package.
++ You can't use local cache when running from a deployment package. + If your project needs to use remote build, don't use the `WEBSITE_RUN_FROM_PACKAGE` app setting. Instead add the `SCM_DO_BUILD_DURING_DEPLOYMENT=true` deployment customization app setting. For Linux, also add the `ENABLE_ORYX_BUILD=true` setting. To learn more, see [Remote build](functions-deployment-technologies.md#remote-build). ### Adding the WEBSITE_RUN_FROM_PACKAGE setting
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` options
## Using WEBSITE_RUN_FROM_PACKAGE = 1
-This section provides information about how to run your function app from a local package file.
+This section provides information about how to run your function app from a local package file.
### Considerations for deploying from an on-site package
-+ Using an on-site package is the recommended option for running from the deployment package, except on Linux hosted in a Consumption plan.
-+ [Zip deployment](#integration-with-zip-deployment) is the recommended way to upload a deployment package to your site.
-+ When not using zip deployment, make sure the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder has a file named `packagename.txt`. This file contains only the name, without any whitespace, of the package file in this folder that's currently running.
++ Using an on-site package is the recommended option for running from the deployment package, except on Linux hosted in a Consumption plan.++ [Zip deployment](#integration-with-zip-deployment) is the recommended way to upload a deployment package to your site.++ When not using zip deployment, make sure the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder has a file named `packagename.txt`. This file contains only the name, without any whitespace, of the package file in this folder that's currently running. ### Integration with zip deployment
-[Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
+[Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
With the `WEBSITE_RUN_FROM_PACKAGE` app setting value of `1`, the zip deployment APIs copy your package to the `d:\home\data\SitePackages` (Windows) or `/home/dat). > [!NOTE]
-> When a deployment occurs, a restart of the function app is triggered. Before a restart, all existing function executions are allowed to complete or time out. To learn more, see [Deployment behaviors](functions-deployment-technologies.md#deployment-behaviors).
+> When a deployment occurs, a restart of the function app is triggered. Function executions currently running during the deploy are terminated. Please review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
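A minimal sketch of this flow with the Azure CLI, using placeholder names and a placeholder package path:

```azurecli
# Run the app from a local package (placeholder names)
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings WEBSITE_RUN_FROM_PACKAGE=1

# Push the package with zip deployment
az functionapp deployment source config-zip \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --src <PATH_TO_PACKAGE>.zip
```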
## Using WEBSITE_RUN_FROM_PACKAGE = URL
This section provides information about how to run your function app from a pack
<a name="troubleshooting"></a> + When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended.
-+ When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package.
++ When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL.
-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package. + When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts). + When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on). + You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account. ### Manually uploading a package to Blob Storage
-To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package and upload it to the destination. This example deploys to a container in Blob Storage.
+To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package and upload it to the destination. This example deploys to a container in Blob Storage.
1. Create a .zip package for your project using the utility of your choice. 1. In the [Azure portal](https://portal.azure.com), search for your storage account name or browse for it in storage accounts.
-
+ 1. In the storage account, select **Containers** under **Data storage**. 1. Select **+ Container** to create a new Blob Storage container in your account.
To deploy a zipped package when using the URL option, you must create a .zip com
1. After the upload completes, choose your uploaded blob file, and copy the URL. You may need to generate a SAS URL if you aren't [using an identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity)
-1. Search for your function app or browse for it in the **Function App** page.
+1. Search for your function app or browse for it in the **Function App** page.
1. In your function app, select **Configurations** under **Settings**.
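The upload and configuration steps above can also be approximated with the Azure CLI. This is only a sketch: the storage account, container, and app names are placeholders, and it assumes a SAS URL rather than a managed identity.

```azurecli
# Upload the package to a private blob container (placeholder names)
az storage blob upload \
  --account-name <STORAGE_ACCOUNT_NAME> \
  --container-name <CONTAINER_NAME> \
  --name package.zip \
  --file ./package.zip \
  --auth-mode login

# Point the function app at the package URL (append a SAS token unless you use a managed identity)
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings WEBSITE_RUN_FROM_PACKAGE="https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<CONTAINER_NAME>/package.zip?<SAS_TOKEN>"
```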
azure-government Documentation Government Ase Disa Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-ase-disa-cap.md
The customer has deployed an ASE with an ILB and has implemented an ExpressRoute
When creating the ASE via the portal, a route table with a default route of 0.0.0.0/0 and next hop "Internet" is created. However, since DISA advertises a default route out the ExpressRoute circuit, the User Defined Route (UDR) should either be deleted, or the default route to the internet should be removed.
-You will need to create new routes in the UDR for the management addresses in order to keep the ASE healthy. For Azure Government ranges see [App Service Environment management addresses](../app-service/environment/management-addresses.md
-)
+You will need to create new routes in the UDR for the management addresses in order to keep the ASE healthy. For Azure Government ranges, see [App Service Environment management addresses](../app-service/environment/management-addresses.md).
- 23.97.29.209/32 --> Internet - 13.72.53.37/32 --> Internet
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Here are some licensing-related resources for Azure Maps:
Here's an example of a high-level migration plan. 1. Take inventory of what Bing Maps SDKs and services your application is using and verify that Azure Maps provides alternative SDKs and services for you to migrate to.
-2. Create an Azure subscription (if you donΓÇÖt already have one) at <https://azure.com>.
+2. Create an Azure subscription (if you don't already have one) at [azure.com](https://azure.com).
3. Create an Azure Maps account ([documentation](./how-to-manage-account-keys.md)) and authentication key or Azure Active Directory ([documentation](./how-to-manage-authentication.md)). 4. Migrate your application code.
Here is a list of useful technical resources for Azure Maps.
## Migration support
-Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many Azure support options: <https://azure.microsoft.com/support/options/>
+Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many [Azure support options](https://azure.microsoft.com/support/options/).
## New terminology
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
If your data meets one of the following criteria, be sure to specify the min and
* If the data is coming from a vector tile source, often source layers for different data types are only available through a range of zoom levels. * If using a tile layer that doesn't have tiles for all zoom levels 0 through 24 and you want it to only rendering at the levels it has tiles, and not try to fill in missing tiles with tiles from other zoom levels. * If you only want to render a layer at certain zoom levels.
-All layers have a `minZoom` and `maxZoom` option where the layer will be rendered when between these zoom levels based on this logic ` maxZoom > zoom >= minZoom`.
+All layers have `minZoom` and `maxZoom` options; the layer is rendered only when the map zoom level satisfies `maxZoom > zoom >= minZoom`.
**Example**
var layer = new atlas.layer.HeatMapLayer(source, null, {
}); ```
-Learn more in the [Clustering and heat maps in this document](clustering-point-data-web-sdk.md #clustering-and-the-heat-maps-layer)
+Learn more in the [Clustering and heat maps in this document](clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer)
### Keep image resources small
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 5/25/2022 Last updated : 6/23/2022 ms:reviwer: harelbr # Troubleshooting problems in Azure Monitor metric alerts
Metric alerts are stateful by default, and therefore additional alerts are not f
- If you're creating the alert rule programmatically (for example, via [Resource Manager](./alerts-metric-create-templates.md), [PowerShell](/powershell/module/az.monitor/), [REST](/rest/api/monitor/metricalerts/createorupdate), [CLI](/cli/azure/monitor/metrics/alert)), set the *autoMitigate* property to 'False'. - If you're creating the alert rule via the Azure portal, uncheck the 'Automatically resolve alerts' option (available under the 'Alert rule details' section).
-<sup>1</sup> For stateless metric alert rules, an alert will trigger once every 5 minutes at a minimum, even if the frequency of evaluation is equal or less than 5 minutes and the condition is still being met.
+<sup>1</sup> For stateless metric alert rules, an alert will trigger once every 10 minutes at a minimum, even if the frequency of evaluation is equal to or less than 5 minutes and the condition is still being met.
> [!NOTE] > Making a metric alert rule stateless prevents fired alerts from becoming resolved, so even after the condition isnΓÇÖt met anymore, the fired alerts will remain in a fired state until the 30 days retention period.
The table below lists the metrics that aren't supported by dynamic thresholds.
## Next steps -- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
+- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead.
-See the [enable monitoring section](#enable-monitoring ) below to begin setting up Application Insights with your App Service resource.
+See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
# [Linux](#tab/Linux)
See the [enable monitoring section](#enable-monitoring ) below to begin setting
[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead.
-See the [enable monitoring section](#enable-monitoring ) below to begin setting up Application Insights with your App Service resource.
+See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Add the following configuration:
distributedTracingMode: 2 // DistributedTracingModes.W3C ``` > [!IMPORTANT]
-> To see all configurations required to enable correlation, see the [JavaScript correlation documentation](./javascript.md#enable-correlation).
+> To see all configurations required to enable correlation, see the [JavaScript correlation documentation](./javascript.md#enable-distributed-tracing).
## Telemetry correlation in OpenCensus Python
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js,
* [.NET Core](asp-net-core.md) * [Java](./java-in-process-agent.md) * [Node.js](../app/nodejs.md)
-* [JavaScript](./javascript.md#enable-correlation)
+* [JavaScript](./javascript.md#enable-distributed-tracing)
* [Python](opencensus-python.md) With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in [the Dependency auto-collection documentation](./auto-collect-dependencies.md).
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
Steps:
To learn more about metrics, refer to the [Micrometer documentation](https://micrometer.io/docs/).
-Other sample code on how to create different types of metrics can be found in[the official Micrometer GitHub repo](https://github.com/micrometer-metrics/micrometer/tree/master/samples/micrometer-samples-core/src/main/java/io/micrometer/core/samples).
+Other sample code on how to create different types of metrics can be found in the [official Micrometer GitHub repo](https://github.com/micrometer-metrics/micrometer/tree/master/samples/micrometer-samples-core/src/main/java/io/micrometer/core/samples).
## How to bind additional metrics collection
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
export class AppModule { }
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
### Route tracking
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
appInsights.loadAppInsights();
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
## Sample app
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md
appInsights.loadAppInsights();
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
### PageView
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
The `AppInsightsErrorBoundary` requires two props to be passed to it, the `React
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
### Route tracking
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Cookie Configuration for instance-based cookie management added in version 2.6.0
By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
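As a quick illustration, assuming the npm-based setup of the JavaScript SDK (the connection string is a placeholder), the setting is enabled in the SDK configuration like this:

```javascript
// Sketch only; the connection string is a placeholder.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000',
    autoTrackPageVisitTime: true // emits the PageVisitTime custom metric for the previous page on each new PageView
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView(); // the previous page's duration is reported when the next page view is tracked
```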
-## Enable Correlation
+## Enable Distributed Tracing
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
See the section below on filtering data with transformations for a summary on wh
### Multi-homing agents You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces since you may be incurring charges for the same data multiple times. If you do multi-home agents, ensure that you're sending unique data to each workspace.
-You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](logs/../agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each are collecting unique data.
+You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together, unless you can ensure that each is collecting unique data.
See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to ensure that you aren't collecting duplicate data for the same machine.
There are multiple methods that you can use to limit the amount of data collecte
* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
-* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-correlation).
+* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-distributed-tracing). A configuration sketch follows this list.
* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
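As a hedged sketch of the client-side options above (the values are illustrative, and server-side sampling configured in ApplicationInsights.config is separate), the JavaScript SDK configuration might look like this:

```javascript
// Illustrative values only; adjust to your own telemetry volume targets.
// The connection string is a placeholder.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: 'InstrumentationKey=00000000-0000-0000-0000-000000000000',
    samplingPercentage: 50,   // keep roughly half of the client-side telemetry
    maxAjaxCallsPerView: 20   // cap the Ajax/fetch calls reported per page view (-1 means unlimited)
    // disableAjaxTracking: true  // turns Ajax reporting off entirely, which also disables JavaScript correlation
  }
});
appInsights.loadAppInsights();
```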
azure-monitor Azure Networking Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-networking-analytics.md
The following table shows data collection methods and other details about how da
![Screenshot of the Diagnostics Settings config for Application Gateway resource.](media/azure-networking-analytics/diagnostic-settings-1.png)
- [ ![Screenshot of the page for configuring Diagnostics settings.](media/azure-networking-analytics/diagnostic-settings-2.png)](media/azure-networking-analytics/application-gateway-diagnostics-2.png#lightbox)
+ [![Screenshot of the page for configuring Diagnostics settings.](media/azure-networking-analytics/diagnostic-settings-2.png)](media/azure-networking-analytics/application-gateway-diagnostics-2.png#lightbox)
5. Click the checkbox for *Send to Log Analytics*. 6. Select an existing Log Analytics workspace, or create a workspace.
Set-AzDiagnosticSetting -ResourceId $gateway.ResourceId -WorkspaceId $workspace
Application insights can be accessed via the insights tab within your Application Gateway resource.
-![Screenshot of Application Gateway insights ](media/azure-networking-analytics/azure-appgw-insights.png
-)
+![Screenshot of Application Gateway insights](media/azure-networking-analytics/azure-appgw-insights.png)
The "view detailed metrics" tab will open up the pre-populated workbook summarizing the data from your Application Gateway.
-[ ![Screenshot of Application Gateway workbook ](media/azure-networking-analytics/azure-appgw-workbook.png)](media/azure-networking-analytics/application-gateway-workbook.png#lightbox)
+[![Screenshot of Application Gateway workbook](media/azure-networking-analytics/azure-appgw-workbook.png)](media/azure-networking-analytics/application-gateway-workbook.png#lightbox)
### New capabilities with Azure Monitor Network Insights workbook
To find more information about the capabilities of the new workbook solution che
> [!NOTE] > All past data is already available within the workbook from the point diagnostic settings were originally enabled. There is no data transfer required.
-2. Access the [default insights workbook](#accessing-azure-application-gateway-analytics-via-azure-monitor-network-insights) for your Application Gateway resource. All existing insights supported by the Application Gateway analytics solution will be already present in the workbook. You can extend this by adding custom [visualizations](../visualize/workbooks-overview.md#visualizations) based on metric & log data.
+2. Access the [default insights workbook](#accessing-azure-application-gateway-analytics-via-azure-monitor-network-insights) for your Application Gateway resource. All existing insights supported by the Application Gateway analytics solution will be already present in the workbook. You can extend this by adding custom [visualizations](../visualize/workbooks-overview.md#visualizations) based on metric and log data.
3. After you are able to see all your metric and log insights, to clean up the Azure Gateway analytics solution from your workspace, you can delete the solution from the solution resource page.
-[ ![Screenshot of the delete option for Azure Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
+[![Screenshot of the delete option for Azure Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
## Troubleshooting
azure-monitor Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/authentication-authorization.md
Before beginning, make sure you have all the values required to make OAuth2 call
### Client Credentials Flow In the client credentials flow, the token is used with the ARM endpoint. A single request is made to receive a token, using the application permissions provided during the Azure AD application setup.
-The resource requested is: <https://management.azure.com/>.
+The resource requested is: `https://management.azure.com`.
You can also use this flow to request a token to `https://api.loganalytics.io`. Replace the "resource" in the example. #### Client Credentials Token URL (POST request)
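As an illustrative sketch only (the tenant ID, client ID, and client secret are placeholders, and the secret must stay server-side), the token request is a plain HTTP POST; the example below uses JavaScript `fetch` in an async context:

```javascript
// Sketch only: tenant ID, client ID, and client secret are placeholders.
const tenantId = '<tenant-id>';

const body = new URLSearchParams({
  grant_type: 'client_credentials',
  client_id: '<app-client-id>',
  client_secret: '<app-client-secret>',
  resource: 'https://management.azure.com'   // or 'https://api.loganalytics.io'
});

const response = await fetch(`https://login.microsoftonline.com/${tenantId}/oauth2/token`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body
});
const { access_token } = await response.json(); // send as "Authorization: Bearer <access_token>"
```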
azure-monitor Batch Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/batch-queries.md
The Azure Monitor Log Analytics API supports batching queries together. Batch queries currently require Azure AD authentication. ## Request format
-To batch queries, use the API endpoint, adding $batch at the end of the URL: <https://api.loganalytics.io/v1/$batch>.
+To batch queries, use the API endpoint, adding $batch at the end of the URL: `https://api.loganalytics.io/v1/$batch`.
If no method is included, batching defaults to the GET method. On GET requests, the API ignores the body parameter of the request object.
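As a sketch of the request shape (the workspace GUIDs, queries, and bearer token are placeholders), a batch call might look like this in JavaScript:

```javascript
// Sketch only: workspace GUIDs, queries, and the bearer token are placeholders.
const batchRequest = {
  requests: [
    {
      id: '1',
      headers: { 'Content-Type': 'application/json' },
      method: 'POST',            // defaults to GET when omitted, and GET ignores the body
      path: '/query',
      workspace: '<workspace-guid-1>',
      body: { query: 'AzureActivity | summarize count() by bin(TimeGenerated, 1h)' }
    },
    {
      id: '2',
      headers: { 'Content-Type': 'application/json' },
      method: 'POST',
      path: '/query',
      workspace: '<workspace-guid-2>',
      body: { query: 'Heartbeat | take 10' }
    }
  ]
};

const response = await fetch('https://api.loganalytics.io/v1/$batch', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer <azure-ad-token>'
  },
  body: JSON.stringify(batchRequest)
});
const results = await response.json(); // one response object per request id
```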
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The default pricing for Log Analytics is a Pay-As-You-Go model that's based on i
Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [custom logs API](custom-logs-overview.md), [ingestion-time transformations](ingestion-time-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. >[!NOTE]
->The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event, often less than 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
+>The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event. On average across all event types, the billed size is about 25% less than the incoming data size. This can be up to 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
### Excluded columns The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
azure-monitor Workbooks Add Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-add-text.md
- Title: Azure Workbooks text parameters
-description: Learn about adding text parameters to your Azure workbook.
---- Previously updated : 05/30/2022---
-# Adding text to your workbook
-
-Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
-
- :::image type="content" source="media/workbooks-add-text/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
-
-Text is added through a markdown control - into which an author can add their content. An author can use the full formatting capabilities of markdown to make their documents appear just how they want it. These include different heading and font styles, hyperlinks, tables, etc. This allows authors to create rich Word- or Portal-like reports or analytic narratives. Text Steps can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
-
-**Edit mode**:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
-
-**Preview mode**:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
-
-## Add text
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-1. Use the _Add_ button below a step or at the bottom of the workbook, and choose "Add Text" to add a text control to the workbook.
-1. Enter markdown text into the editor field
-1. Use the _Text Style_ option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
-
- > [!TIP]
- > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
-
-1. Use the Preview tab to see how your content will look. While editing, the preview will show the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, with no scrollbars.
-1. Select the _Done Editing_ button to complete editing the step
-
-## Text styles
-The following text styles are available for text steps:
-
-| Style | Description |
-| | |
-| `plain` | No other formatting is applied |
-| `info` | The portal's "info" style, with a `ℹ` or similar icon and blue background |
-| `error` | The portal's "error" style, with a `❌` or similar icon and red background |
-| `success` | The portal's "success" style, with a `✔` or similar icon and green background |
-| `upsell` | The portal's "upsell" style, with a `🚀` or similar icon and purple background |
-| `warning` | The portal's "warning" style, with a `⚠` or similar icon and blue background |
--
-Instead of picking a specific style, you may also choose a text parameter as the source of the style. The parameter value must be one of the above text values. The absence of a value, or any unrecognized value will be treated as `plain` style.
-
-### info style example:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
-
-### warning style example:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
-
-## Next Steps
-- [Add Workbook parameters](workbooks-parameters.md)
azure-monitor Workbooks Combine Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-combine-data.md
- Title: Combine data from different sources in your Azure Workbook
-description: Learn how to combine data from different sources in your Azure Workbook.
---- Previously updated : 05/30/2022---
-# Combine data from different sources
-
-It is often necessary to bring together data from different sources that enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
-
-Workbooks allow not just the querying of different data sources, but also provides simple controls that allow you to merge or join the data to provide rich insights. The `merge` control is the way to achieve it.
-
-## Combining alerting data with Log Analytics VM performance data
-
-The example below combines alerting data with Log Analytics VM performance data to get a rich insights grid.
-
-![Screenshot of a workbook with a merge control that combines alert and log analytics data.](./media/workbooks-data-sources/merge-control.png)
-
-## Using merge control to combine Azure Resource Graph and Log Analytics data
-
-Here is a tutorial on using the merge control to combine Azure Resource Graph and Log Analytics data:
-
-[![Combining data from different sources in workbooks](https://img.youtube.com/vi/7nWP_YRzxHg/0.jpg)](https://www.youtube.com/watch?v=7nWP_YRzxHg "Video showing how to combine data from different sources in workbooks.")
-
-Workbooks support these merges:
-
-* Inner unique join
-* Full inner join
-* Full outer join
-* Left outer join
-* Right outer join
-* Left semi-join
-* Right semi-join
-* Left anti-join
-* Right anti-join
-* Union
-* Duplicate table
-
-## Next steps
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
Title: Create an Azure Workbook
+ Title: Creating an Azure Workbook
description: Learn how to create an Azure Workbook.
Last updated 05/30/2022
-# Create an Azure Workbook
+# Creating an Azure Workbook
+This article describes how to create a new workbook and how to add elements to your Azure Workbook.
-This video provides a walkthrough of creating workbooks.
+This video walks you through creating workbooks.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4B4Ap]
-## To create a new Azure Workbook
+## Create a new Azure Workbook
To create a new Azure workbook: 1. From the Azure Workbooks page, select an empty template or select **New** in the top toolbar.
-1. Combine any of these steps to include the elements you want to the workbook:
- - [Add text to your workbook](workbooks-add-text.md)
- - [Add parameters to your workbook](workbooks-parameters.md)
- - Add queries to your workbook
- - [Combine data from different sources](workbooks-combine-data.md)
- - Add Metrics to your workbook
- - Add Links to your workbook
- - Add Groups to your workbook
- - Add more configuration options to your workbook
--
-## Next steps
-- [Getting started with Azure Workbooks](workbooks-getting-started.md).-- [Azure workbooks data sources](workbooks-data-sources.md).
+1. Add any combination of these elements to your workbook:
+ - [Text](#adding-text)
+ - Parameters
+ - [Queries](#adding-queries)
+ - [Metric charts](#adding-metric-charts)
+ - [Links](#adding-links)
+ - Groups
+ - Configuration options
+
+## Adding text
+
+Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
+
+Text is added through a markdown control into which an author can add their content. An author can use the full formatting capabilities of markdown. These include different heading and font styles, hyperlinks, tables, etc. Markdown allows authors to create rich Word- or Portal-like reports or analytic narratives. Text can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
+
+**Edit mode**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
+
+**Preview mode**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
+
+### Add text to an Azure workbook
+1. Make sure you are in **Edit** mode. Add a text element by doing any one of the following:
+ - Select **Add**, and **Add text** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add text**.
+1. Enter markdown text into the editor field.
+1. Use the **Text Style** option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
+
+ > [!TIP]
+ > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
+
+1. Use the **Preview** tab to see how your content will look. The preview shows the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, without a scrollbar.
+1. Select **Done Editing**.
+
+### Text styles
+These text styles are available:
+
+| Style | Description |
+| | |
+| plain| No formatting is applied |
+|info| The portal's "info" style, with a `ℹ` or similar icon and blue background |
+|error| The portal's "error" style, with a `❌` or similar icon and red background |
+|success| The portal's "success" style, with a `✔` or similar icon and green background |
+|upsell| The portal's "upsell" style, with a `🚀` or similar icon and purple background |
+|warning| The portal's "warning" style, with a `⚠` or similar icon and blue background |
++
+You can also choose a text parameter as the source of the style. The parameter value must be one of the above text values. The absence of a value, or any unrecognized value, is treated as the `plain` style.
+
+### Text style examples
+
+**Info style example**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
+
+**Warning style example**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
+
+## Adding queries
+
+Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md).
+
+For example, you can query Azure Resource Health to help you view any service problems affecting your resources, or you can query Azure Monitor Metrics, which are numeric data collected at regular intervals that provide information about an aspect of a system at a particular time.
+
+### Add a query to an Azure Workbook
+
+1. Make sure you are in **Edit** mode. Add a query by doing any one of the following:
+ - Select **Add**, and **Add query** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add query**.
+1. Select the [data source](workbooks-data-sources.md) for your query. The other fields are determined based on the data source you choose.
+1. Select any other values that are required based on the data source you selected.
+1. Select the [visualization](workbooks-visualizations.md) for your workbook.
+1. In the query section, enter your query, or select from a list of sample queries by selecting **Samples**, and then edit the query to your liking.
+1. Select **Run Query**.
+1. When you are sure you have the query you want in your workbook, select **Done editing**.
+
+## Adding metric charts
+
+Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Workbooks allow the visualization of this data as time-series charts.
+
+The example below shows the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
++
+### Add a metric chart to an Azure Workbook
+1. Make sure you are in **Edit** mode. Add a metric chart by doing any one of the following:
+ - Select **Add**, and **Add metric** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add metric**.
+2. Select a resource type (for example, Storage Account), the resources to target, the metric namespace and name, and the aggregation to use.
+3. Set other parameters if needed, such as time range, split-by, visualization, size, and color palette.
+
+Here is the edit mode version of the metric chart above:
++
+### Metric chart parameters
+
+| Parameter | Explanation | Example |
+| - |:-|:-|
+| Resource Type| The resource type to target | Storage or Virtual Machine. |
+| Resources| A set of resources to get the metrics value from | MyStorage1 |
+| Namespace | The namespace with the metric | Storage > Blob |
+| Metric| The metric to visualize | Storage > Blob > Transactions |
+| Aggregation | The aggregation function to apply to the metric | Sum, Count, Average, etc. |
+| Time Range | The time window to view the metric in | Last hour, Last 24 hours, etc. |
+| Visualization | The visualization to use | Area, Bar, Line, Scatter, Grid |
+| Split By| Optionally split the metric on a dimension | Transactions by Geo type |
+| Size | The vertical size of the control | Small, medium or large |
+| Color palette | The color palette to use in the chart. Ignored if the `Split by` parameter is used | Blue, green, red, etc. |
+
+### Metric chart examples
+**Transactions split by API name as a line chart**
+++
+**Transactions split by response type as a large bar chart**
++
+**Average latency as a scatter chart**
++
+## Adding links
+
+Authors can use link elements to link to other views, workbooks, or other items inside a workbook, or to create tabbed views within a workbook. Links can be styled as hyperlinks, buttons, or tabs.
+
+### Link styles
+You can apply styles to the link element itself as well as to individual links.
+
+**Link element styles**
++
+|Style |Sample |Notes |
+||||
+|Bullet List | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-bullet.png" alt-text="Screenshot of bullet style workbook link."::: | The default. Links appear as a bulleted list of links, one on each line. The **Text before link** and **Text after link** fields can be used to add text before or after the link items. |
+|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot of list style workbook link."::: | Links appear as a list of links, with no bullets. |
+|Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot of paragraph style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. |
+|Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot of navigation style workbook link."::: | Links appear as links, with vertical dividers, or pipes (`|`) between each link. |
+|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab, no link styling options apply to individual links. See the [tabs](#using-tabs) section below for how to configure tabs. |
+|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear as an Azure portal-styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#using-toolbars) section below for how to configure toolbars. |
++
+**Link styles**
+
+| Style | Description |
+|:- |:-|
+| Link | The default. Links appear as hyperlinks. URL links can only use the link style. |
+| Button (Primary) | The link appears as a "primary" button in the portal, usually a blue color. |
+| Button (Secondary) | The link appears as a "secondary" button in the portal, usually a "transparent" color: a white button in light themes and a dark gray button in dark themes. |
+
+When using buttons, if required parameters are used in Button text, Tooltip text, or Value fields, and the required parameter is unset, the button will be disabled. For example, this can be used to disable buttons when no value is selected in another parameter/control.
+
+### Link actions
+Links can use all of the actions described in [link actions](workbooks-link-actions.md), and have two more available actions:
+
+| Action | Description |
+|:- |:-|
+|Set a parameter value | When selecting a link/button/tab, a parameter can be set to a value. Commonly tabs are configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value |
+|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
+
+### Using tabs
+
+Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create two tabs, where selecting either tab sets a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
+++
+You can then add other items in the workbook that are conditionally visible if the **selectedTab** parameter value is "1" by using the advanced settings:
++
+When using tabs, the first tab will be selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:
++
+A sample workbook with the above tabs is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-links).
+
+### Tabs limitations
+ - When using tabs, URL links are not supported. A URL link in a tab appears as a disabled tab.
+ - When using tabs, no item styling is available. Items will only be displayed as tabs, and only the tab name (link text) field will be displayed. Fields that are not used in tab style are hidden while in edit mode.
+- When using tabs, the first tab will become selected by default, invoking whatever action that tab has specified. If the first tab's action opens another view, that means as soon as the tabs are created, a view appears.
+- While having tabs open other views is *supported*, it should be used sparingly, as most users won't expect selecting a tab to navigate. (Also, if other tabs are setting a parameter to a specific value, a tab that opens a view would not change that value, so the rest of the workbook content will continue to show the view/data for the previous tab.)
+
+### Using toolbars
+
+To have your links appear styled as a toolbar, use the Toolbar style. In toolbar style, the author must fill in fields for:
+- Button text, the text to display on the toolbar. Parameters may be used in this field.
+- Icon, the icon to display in the toolbar.
+- Tooltip Text, the text to display in the toolbar button's tooltip. Parameters may be used in this field.
++
+If any required parameters are used in Button text, Tooltip text, or Value fields, and the required parameter is unset, the toolbar button will be disabled. For example, this can be used to disable toolbar buttons when no value is selected in another parameter/control.
+
+A sample workbook with toolbars, globals parameters, and ARM Actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
Workbooks can extract data from these data sources:
- [Metrics](#metrics) - [Azure Resource Graph](#azure-resource-graph) - [Azure Resource Manager](#azure-resource-manager)
+ - [Azure Data Explorer](#azure-data-explorer)
- [JSON](#json)
+ - [Merge](#merge)
- [Custom endpoint](#custom-endpoint)
+ - [Workload health](#workload-health)
+ - [Azure resource health](#azure-resource-health)
- [Azure RBAC](#azure-rbac)-
+ - [Change Analysis (preview)](#change-analysis-preview)
## Logs Workbooks allow querying logs from the following sources:
Workbooks allow querying logs from the following sources:
Workbook authors can use KQL queries that transform the underlying resource data to select a result set that can be visualized as text, charts, or grids.
-![Screenshot of workbooks logs report interface](./media/workbooks-data-sources/logs.png)
+![Screenshot of workbooks logs report interface.](./media/workbooks-data-sources/logs.png)
Workbook authors can easily query across multiple resources creating a truly unified rich reporting experience.
Workbook authors can easily query across multiple resources creating a truly uni
Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can be accessed via workbooks. Metrics can be accessed in workbooks through a specialized control that allows you to specify the target resources, the desired metrics, and their aggregation. This data can then be plotted in charts or grids.
-![Screenshot of workbook metrics charts of cpu utilization](./media/workbooks-data-sources/metrics-graph.png)
+![Screenshot of workbook metrics charts of cpu utilization.](./media/workbooks-data-sources/metrics-graph.png)
-![Screenshot of workbook metrics interface](./media/workbooks-data-sources/metrics.png)
+![Screenshot of workbook metrics interface.](./media/workbooks-data-sources/metrics.png)
## Azure Resource Graph
Workbooks support querying for resources and their metadata using Azure Resource
To make a query control use this data source, use the Query type drop-down to choose Azure Resource Graph and select the subscriptions to target. Use the Query control to add the ARG KQL-subset that selects an interesting resource subset.
-![Screenshot of Azure Resource Graph KQL query](./media/workbooks-data-sources/azure-resource-graph.png)
+![Screenshot of Azure Resource Graph KQL query.](./media/workbooks-data-sources/azure-resource-graph.png)
## Azure Resource Manager
Workbook supports Azure Resource Manager REST operations. This allows the abilit
To make a query control use this data source, use the Data source drop-down to choose Azure Resource Manager. Provide the appropriate parameters such as Http method, url path, headers, url parameters and/or body. > [!NOTE]
-> Only `GET`, `POST`, and `HEAD` operations are currently supported.
+> Only GET, POST, and HEAD operations are currently supported.
## Azure Data Explorer Workbooks now have support for querying from [Azure Data Explorer](/azure/data-explorer/) clusters with the powerful [Kusto](/azure/kusto/query/index) query language. For the **Cluster Name** field, you should add the region name following the cluster name. For example: *mycluster.westeurope*.
-![Screenshot of Kusto query window](./media/workbooks-data-sources/data-explorer.png)
+![Screenshot of Kusto query window.](./media/workbooks-data-sources/data-explorer.png)
-## Workload health
+## JSON
-Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. Workbooks allow users to use this information to create rich interactive reports.
+The JSON provider allows you to create a query result from static JSON content. It is most commonly used in Parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the Results tab and JSONPath settings to configure columns.
-To make a query control use this data source, use the **Query type** drop-down to choose Workload Health and select subscription, resource group or VM resources to target. Use the health filter drop downs to select an interesting subset of health incidents for your analytic needs.
+> [!NOTE]
+> Do not include any sensitive information in any fields (headers, parameters, body, url), since they will be visible to all of the Workbook users.
-![Screenshot of alerts query](./media/workbooks-data-sources/workload-health.png)
+This provider supports [JSONPath](workbooks-jsonpath.md).
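For illustration, a small array like the following (hypothetical values) pasted into the JSON content field yields a two-column grid; the same value/label shape is commonly used for dropdown parameters. The snippet is shown as a JavaScript literal, but the array itself is plain JSON:

```javascript
// Hypothetical values: each object becomes a grid row, and each key becomes a column.
// In the workbook editor, paste just the array into the JSON content field.
[
  { "value": "prod", "label": "Production" },
  { "value": "stage", "label": "Staging" },
  { "value": "dev", "label": "Development" }
]
```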
-## Azure resource health
+## Merge
-Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports
+Merging data from different sources can enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
-To make a query control use this data source, use the **Query type** drop-down to choose Azure health and select the resources to target. Use the health filter drop downs to select an interesting subset of resource issues for your analytic needs.
+Workbooks not only allow querying different data sources, but also provide simple controls that let you merge or join the data for richer insights. The **merge** control is the way to achieve this.
-![Screenshot of alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
+### Combining alerting data with Log Analytics VM performance data
-## Change Analysis (preview)
+The example below combines alerting data with Log Analytics VM performance data to get a rich insights grid.
-To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** drop-down and choose *Change Analysis (preview)* and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop-down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
+![Screenshot of a workbook with a merge control that combines alert and log analytics data.](./media/workbooks-data-sources/merge-control.png)
-> [!div class="mx-imgBorder"]
-> ![A screenshot of a workbook with Change Analysis](./media/workbooks-data-sources/change-analysis-data-source.png)
+### Using merge control to combine Azure Resource Graph and Log Analytics data
-## JSON
+Here is a tutorial on using the merge control to combine Azure Resource Graph and Log Analytics data:
-The JSON provider allows you to create a query result from static JSON content. It is most commonly used in Parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the Results tab and JSONPath settings to configure columns.
+[![Combining data from different sources in workbooks](https://img.youtube.com/vi/7nWP_YRzxHg/0.jpg)](https://www.youtube.com/watch?v=7nWP_YRzxHg "Video showing how to combine data from different sources in workbooks.")
-> [!NOTE]
-> Do not include any sensitive information in any fields (`headers`, `parameters`, `body`, `url`), since they will be visible to all of the Workbook users.
+Workbooks support these merges:
-This provider supports [JSONPath](workbooks-jsonpath.md).
+* Inner unique join
+* Full inner join
+* Full outer join
+* Left outer join
+* Right outer join
+* Left semi-join
+* Right semi-join
+* Left anti-join
+* Right anti-join
+* Union
+* Duplicate table
## Custom endpoint Workbooks support getting data from any external source. If your data lives outside Azure you can bring it to Workbooks by using this data source type.
-To make a query control use this data source, use the _Data source_ drop-down to choose _Custom Endpoint_. Provide the appropriate parameters such as `Http method`, `url`, `headers`, `url parameters` and/or `body`. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) otherwise the request will fail.
+To make a query control use this data source, use the **Data source** drop-down to choose **Custom Endpoint**. Provide the appropriate parameters such as **Http method**, **url**, **headers**, **url parameters**, and/or **body**. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) otherwise the request will fail.
-To avoid automatically making calls to untrusted hosts when using templates, the user needs to mark the used hosts as trusted. This can be done by either clicking on the _Add as trusted_ button, or by adding it as a trusted host in Workbook settings. These settings will be saved in [browsers that support IndexDb with web workers](https://caniuse.com/#feat=indexeddb).
+To avoid automatically making calls to untrusted hosts when using templates, the user needs to mark the used hosts as trusted. This can be done by either selecting the **Add as trusted** button, or by adding it as a trusted host in Workbook settings. These settings will be saved in [browsers that support IndexedDB with web workers](https://caniuse.com/#feat=indexeddb).
This provider supports [JSONPath](workbooks-jsonpath.md).
+## Workload health
+
+Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. Workbooks allow users to use this information to create rich interactive reports.
+
+To make a query control use this data source, use the **Query type** drop-down to choose Workload Health and select subscription, resource group or VM resources to target. Use the health filter drop downs to select an interesting subset of health incidents for your analytic needs.
+
+![Screenshot of alerts query.](./media/workbooks-data-sources/workload-health.png)
+
+## Azure resource health
+
+Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports.
+
+To make a query control use this data source, use the **Query type** drop-down to choose Azure health and select the resources to target. Use the health filter drop downs to select an interesting subset of resource issues for your analytic needs.
+
+![Screenshot of alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
## Azure RBAC The Azure RBAC provider allows you to check permissions on resources. It is most commonly used in a parameter to check whether the correct RBAC permissions are set up. A use case would be to create a parameter that checks deployment permission and then notifies the user if they don't have it. Simple JSON arrays or objects will automatically be converted into grid rows and columns, or into text with a 'hasPermission' column that is either true or false. The permission is checked on each resource, and the results are then combined with either 'or' or 'and'. The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array.
The Azure RBAC provider allows you to check permissions on resources. It is most
``` ["Microsoft.Resources/deployments/read","Microsoft.Resources/deployments/write","Microsoft.Resources/deployments/validate/action","Microsoft.Resources/operations/read"] ```+
+## Change Analysis (preview)
+
+To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** drop-down and choose *Change Analysis (preview)* and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop-down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
+
+> [!div class="mx-imgBorder"]
+> ![A screenshot of a workbook with Change Analysis.](./media/workbooks-data-sources/change-analysis-data-source.png)
+ ## Next steps - [Getting started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
You can access Workbooks in a few ways:
:::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot of Workbooks icon in the menu."::: -- From a **Log Analytics workspace** page, select the **Workbooks** icon at the top of the page.
+- In a **Log Analytics workspace** page, select the **Workbooks** icon at the top of the page.
:::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks icon on Log analytics workspace page."::: The gallery opens. Select a saved workbook or a template from the gallery, or search for the name in the search bar.
-## Start a new workbook
-To start a new workbook, select the **Empty** template under **Quick start**, or the **New** icon in the top navigation bar. For more information on creating new workbooks, see [Create a workbook](workbooks-create-workbook.md).
## Save a workbook To save a workbook, save the report with a specific title, subscription, resource group, and location. The workbook autofills with the same settings as the Log Analytics workspace (same subscription and resource group); however, users may change these report settings. Workbooks are shared resources that require write access to the parent resource group to be saved.
azure-monitor Workbooks Grid Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-grid-visualizations.md
Title: Azure Monitor workbook grid visualizations description: Learn about all the Azure Monitor workbook grid visualizations. - Previously updated : 09/04/2020 Last updated : 06/22/2022+ # Grid visualizations
The example below shows a grid that combines icons, heatmaps, and spark-bars to
## Adding a log-based grid 1. Switch the workbook to edit mode by clicking on the **Edit** toolbar item.
-2. Use the **Add query** link to add a log query control to the workbook.
+2. Select **Add query** to add a log query control to the workbook.
3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target. 4. Use the Query editor to enter the KQL for your analysis (for example, VMs with memory below a threshold) 5. Set the visualization to **Grid**
The example below shows a grid that combines icons, heatmaps, and spark-bars to
| Parameter | Explanation | Example | | - |:-|:-|
-| `Query Type` | The type of query to use. | Log, Azure Resource Graph, etc. |
-| `Resource Type` | The resource type to target. | Application Insights, Log Analytics, or Azure-first |
-| `Resources` | A set of resources to get the metrics value from. | MyApp1 |
-| `Time Range` | The time window to view the log chart. | Last hour, Last 24 hours, etc. |
-| `Visualization` | The visualization to use. | Grid |
-| `Size` | The vertical size of the control. | Small, medium, large, or full |
-| `Query` | Any KQL query that returns data in the format expected by the chart visualization. | _requests \| summarize Requests = count() by name_ |
+|Query Type| The type of query to use. | Log, Azure Resource Graph, etc. |
+|Resource Type| The resource type to target. | Application Insights, Log Analytics, or Azure-first |
+|Resources| A set of resources to get the metrics value from. | MyApp1 |
+|Time Range| The time window to view the log chart. | Last hour, Last 24 hours, etc. |
+|Visualization| The visualization to use. | Grid |
+|Size| The vertical size of the control. | Small, medium, large, or full |
+|Query| Any KQL query that returns data in the format expected by the chart visualization. | _requests \| summarize Requests = count() by name_ |
## Simple Grid
Here is the same grid styled as bars:
### Styling a grid column 1. Select the **Column Setting** button on the query control toolbar.
-2. In the *Edit column settings*, select the column to style.
+2. In the **Edit column settings**, select the column to style.
3. Choose a column renderer (for example heatmap, bar, bar underneath, etc.) and related settings to style your column.
-Below is an example that styles the *Request* column as a bar:
+Below is an example that styles the **Request** column as a bar:
[![Screenshot of a log based grid with request column styled as a bar.](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png)](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png#lightbox)
-### Column renderers
-
-| Column Renderer | Explanation | Additional Options |
-|:- |:-|:-|
-| `Automatic` | The default - uses the most appropriate renderer based on the column type. | |
-| `Text` | Renders the column values as text. | |
-| `Right Aligned` | Similar to text except that it is right aligned. | |
-| `Date/Time` | Renders a readable date time string. | |
-| `Heatmap` | Colors the grid cells based on the value of the cell. | Color palette and min/max value used for scaling. |
-| `Bar` | Renders a bar next to the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
-| `Bar underneath` | Renders a bar near the bottom of the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
-| `Composite bar` | Renders a composite bar using the specified columns in that row. Refer [Composite Bar](workbooks-composite-bar.md) for details. | Columns with corresponding colors to render the bar and a label to display at the top of the bar. |
-| `Spark bars` | Renders a spark bar in the cell based on the values of a dynamic array in the cell. For example, the Trend column from the screenshot at the top. | Color palette and min/max value used for scaling. |
-| `Spark lines` | Renders a spark line in the cell based on the values of a dynamic array in the cell. | Color palette and min/max value used for scaling. |
-| `Icon` | Renders icons based on the text values in the cell. Supported values include: `cancelled`, `critical`, `disabled`, `error`, `failed`, `info`, `none`, `pending`, `stopped`, `question`, `success`, `unknown`, `warning` `uninitialized`, `resource`, `up`, `down`, `left`, `right`, `trendup`, `trenddown`, `4`, `3`, `2`, `1`, `Sev0`, `Sev1`, `Sev2`, `Sev3`, `Sev4`, `Fired`, `Resolved`, `Available`, `Unavailable`, `Degraded`, `Unknown`, and `Blank`.| |
-| `Link` | Renders a link that when clicked or performs a configurable action. Use this if you *only* want the item to be a link. Any of the other types can *also* be a link by using the `Make this item a link` setting. For more information see [Link Actions](#link-actions) below. | |
-| `Location` | Renders a friendly Azure region name based on a region ids. | |
-| `Resource type` | Renders a friendly resource type string based on a resource type id | |
-| `Resource` | Renders a friendly resource name and link based on a resource id | Option to show the resource type icon |
-| `Resource group` | Renders a friendly resource group name and link based on a resource group id. If the value of the cell is not a resource group, it will be converted to one. | Option to show the resource group icon |
-| `Subscription` | Renders a friendly subscription name and link based on a subscription id. if the value of the cell is not a subscription, it will be converted to one. | Option to show the subscription icon. |
-| `Hidden` | Hides the column in the grid. Useful when the default query returns more columns than needed but a project-away is not desired | |
-
-### Link actions
-
-If the `Link` renderer is selected or the *Make this item a link* checkbox is selected, then the author can configure a link action that will occur on selecting the cell. THis usually is taking the user to some other view with context coming from the cell or may open up a url.
This usually takes the user to another view with context coming from the cell, or it may open a URL.
### Custom formatting
-Workbooks also allows users to set the number formatting of their cell values. They can do so by clicking on the *Custom formatting* checkbox when available.
+Workbooks also allow users to set the number formatting of their cell values. They can do so by clicking on the **Custom formatting** checkbox when available.
| Formatting option | Explanation | |:- |:-|
-| `Units` | The units for the column - various options for percentage, counts, time, byte, count/time, bytes/time, etc. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234 s. |
-| `Style` | The format to render it as - decimal, currency, percent. |
-| `Show group separator` | Checkbox to show group separators. Renders 1234 as 1,234 in the US. |
-| `Minimum integer digits` | Minimum number of integer digits to use (default 1). |
-| `Minimum fractional digits` | Minimum number of fractional digits to use (default 0). |
-| `Maximum fractional digits` | Maximum number of fractional digits to use. |
-| `Minimum significant digits` | Minimum number of significant digits to use (default 1). |
-| `Maximum significant digits` | Maximum number of significant digits to use. |
-| `Custom text for missing values` | When a data point does not have a value, show this custom text instead of a blank. |
+|Units| The units for the column - various options for percentage, counts, time, byte, count/time, bytes/time, etc. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234 s. |
+|Style| The format to render it as - decimal, currency, percent. |
+|Show group separator| Checkbox to show group separators. Renders 1234 as 1,234 in the US. |
+|Minimum integer digits| Minimum number of integer digits to use (default 1). |
+|Minimum fractional digits| Minimum number of fractional digits to use (default 0). |
+|Maximum fractional digits| Maximum number of fractional digits to use. |
+|Minimum significant digits| Minimum number of significant digits to use (default 1). |
+|Maximum significant digits| Maximum number of significant digits to use. |
+|Custom text for missing values| When a data point does not have a value, show this custom text instead of a blank. |
### Custom date formatting
When the author has specified that a column is set to the Date/Time renderer, th
| Formatting option | Explanation | |:- |:-|
-| `Style` | The format to render a date as short, long, full formats, or a time as short or long time formats. |
-| `Show time as` | Allows the author to decide between showing the time in local time (default), or as UTC. Depending on the date format style selected, the UTC/time zone information may not be displayed. |
+|Style| The format to render a date as short, long, full formats, or a time as short or long time formats. |
+|Show time as| Allows the author to decide between showing the time in local time (default), or as UTC. Depending on the date format style selected, the UTC/time zone information may not be displayed. |
## Custom column width setting
-The author can customize the width of any column in the grid using the *Custom Column Width* field in *Column Settings*.
+The author can customize the width of any column in the grid using the **Custom Column Width** field in **Column Settings**.
![Screenshot of column settings with the custom column width field indicated in a red box](./media/workbooks-grid-visualizations/custom-column-width-setting.png)
requests
[![Screenshot of a log based grid with a heatmap having a shared scale across columns using the query above.](./media/workbooks-grid-visualizations/log-chart-grid-icons.png)](./media/workbooks-grid-visualizations/log-chart-grid-icons.png#lightbox)

Supported icon names include:
-`cancelled`, `critical`, `disabled`, `error`, `failed`, `info`, `none`, `pending`, `stopped`, `question`, `success`, `unknown`, `warning` `uninitialized`, `resource`, `up`, `down`, `left`, `right`, `trendup`, `trenddown`, `4`, `3`, `2`, `1`, `Sev0`, `Sev1`, `Sev2`, `Sev3`, `Sev4`, `Fired`, `Resolved`, `Available`, `Unavailable`, `Degraded`, `Unknown`, and `Blank`.
+- cancelled
+- critical
+- disabled
+- error
+- failed
+- info
+- none
+- pending
+- stopped
+- question
+- success
+- unknown
+- warning
+- uninitialized
+- resource
+- up
+- down
+- left
+- right
+- trendup
+- trenddown
+- 4
+- 3
+- 2
+- 1
+- Sev0
+- Sev1
+- Sev2
+- Sev3
+- Sev4
+- Fired
+- Resolved
+- Available
+- Unavailable
+- Degraded
+- Unknown
+- Blank
-### Using thresholds with links
-
-The instructions below will show you how to use thresholds with links to assign icons and open different workbooks. Each link in the grid will open up a different workbook template for that Application Insights resource.
-
-1. Switch the workbook to edit mode by selecting *Edit* toolbar item.
-2. Select **Add** then *Add query*.
-3. Change the *Data source* to "JSON" and *Visualization* to "Grid".
-4. Enter the following query.
-
-```json
-[
- { "name": "warning", "link": "Community-Workbooks/Performance/Performance Counter Analysis" },
- { "name": "info", "link": "Community-Workbooks/Performance/Performance Insights" },
- { "name": "error", "link": "Community-Workbooks/Performance/Apdex" }
-]
-```
-
-5. Run query.
-6. Select **Column Settings** to open the settings.
-7. Select "name" from *Columns*.
-8. Under *Column renderer*, choose "Thresholds".
-9. Enter and choose the following *Threshold Settings*.
-
- | Operator | Value | Icons |
- |-|||
- | == | warning | Warning |
- | == | error | Failed |
-
- ![Screenshot of Edit column settings tab with the above settings.](./media/workbooks-grid-visualizations/column-settings.png)
-
- Keep the default row as is. You may enter whatever text you like. The Text column takes a String format as an input and populates it with the column value and unit if specified. For example, if warning is the column value the text can be "{0} {1} link!", it will be displayed as "warning link!".
-10. Select the *Make this item a link* box.
- 1. Under *View to open*, choose "Workbook (Template)".
- 2. Under *Link value comes from*, choose "link".
- 3. Select the *Open link in Context Blade* box.
- 4. Choose the following settings in *Workbook Link Settings*
- 1. Under *Template Id comes from*, choose "Column".
- 2. Under *Column* choose "link".
-
- ![Screenshot of link settings with the above settings.](./media/workbooks-grid-visualizations/make-this-item-a-link.png)
-
-11. Select "link" from *Columns*. Under Settings next to *Column renderer*, select **(Hide column)**.
-1. To change the display name of the "name" column select the **Labels** tab. On the row with "name" as its *Column ID*, under *Column Label enter the name you want displayed.
-2. Select **Apply**
-
-![Screenshot of a thresholds in grid with the above settings](./media/workbooks-grid-visualizations/thresholds-workbooks-links.png)
## Fractional units percentages
The image below shows the same table, except the first column is set to 50% widt
Combining fr, %, px, and ch widths is possible and works similarly to the previous examples. The widths that are set by the static units (ch and px) are hard constants that won't change even if the window/resolution is changed. The columns set by % will take up their percentage based on the total grid width (might not be exact due to previously minimum widths). The columns set with fr will just split up the remaining grid space based on the number of fractional units they are allotted.

[![Screenshot of columns in grid with assortment of different width units used](./media/workbooks-grid-visualizations/custom-column-width-fr3.png)](./media/workbooks-grid-visualizations/custom-column-width-fr3.png#lightbox)
-## Next steps
-
-* Learn how to create a [tree in workbooks](workbooks-tree-visualizations.md).
-* Learn how to create [workbook text parameters](workbooks-text.md).
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
Title: Azure Monitor Workbooks link actions description: How to use link actions in Azure Monitor Workbooks Previously updated : 01/07/2021 Last updated : 06/23/2022++

# Link actions
Link actions can be accessed through Workbook link steps or through column setti
| Link action | Action on click |
|:- |:-|
-| `Generic Details` | Shows the row values in a property grid context view. |
-| `Cell Details` | Shows the cell value in a property grid context view. Useful when the cell contains a dynamic type with information (for example, json with request properties like location, role instance, etc.). |
-| `Url` | The value of the cell is expected to be a valid http url, and the cell will be a link that opens up that url in a new tab.|
+|Generic Details| Shows the row values in a property grid context view. |
+|Cell Details| Shows the cell value in a property grid context view. Useful when the cell contains a dynamic type with information (for example, json with request properties like location, role instance, etc.). |
+|Url| The value of the cell is expected to be a valid HTTP URL, and the cell will be a link that opens that URL in a new tab.|
## Application Insights

| Link action | Action on click |
|:- |:-|
-| `Custom Event Details` | Opens the Application Insights search details with the custom event ID (`itemId`) in the cell. |
-| `* Details` | Similar to Custom Event Details, except for dependencies, exceptions, page views, requests, and traces. |
-| `Custom Event User Flows` | Opens the Application Insights User Flows experience pivoted on the custom event name in the cell. |
-| `* User Flows` | Similar to Custom Event User Flows except for exceptions, page views and requests. |
-| `User Timeline` | Opens the user timeline with the user ID (`user_Id`) in the cell. |
-| `Session Timeline` | Opens the Application Insights search experience for the value in the cell (for example, search for text 'abc' where abc is the value in the cell). |
-
-`*` denotes a wildcard for the above table
+|Custom Event Details| Opens the Application Insights search details with the custom event ID ("itemId") in the cell. |
+|Details| Similar to Custom Event Details, except for dependencies, exceptions, page views, requests, and traces. |
+|Custom Event User Flows| Opens the Application Insights User Flows experience pivoted on the custom event name in the cell. |
+|User Flows| Similar to Custom Event User Flows except for exceptions, page views and requests. |
+|User Timeline| Opens the user timeline with the user ID ("user_Id") in the cell. |
+|Session Timeline| Opens the Application Insights search experience for the value in the cell (for example, search for text 'abc' where abc is the value in the cell). |
## Azure resource

| Link action | Action on click |
|:- |:-|
-| `ARM Deployment` | Deploy an Azure Resource Manager template. When this item is selected, additional fields are displayed to let the author configure which Azure Resource Manager template to open, parameters for the template, etc. [See Azure Resource Manager deployment link settings](#azure-resource-manager-deployment-link-settings). |
-| `Create Alert Rule` | Creates an Alert rule for a resource. |
-| `Custom View` | Opens a custom View. When this item is selected, additional fields are displayed to let the author configure the View extension, View name, and any parameters used to open the View. [See custom view](#custom-view-link-settings). |
-| `Metrics` | Opens a metrics view. |
-| `Resource overview` | Open the resource's view in the portal based on the resource ID value in the cell. The author can also optionally set a `submenu` value that will open up a specific menu item in the resource view. |
-| `Workbook (template)` | Open a workbook template. When this item is selected, additional fields are displayed to let the author configure what template to open, etc. |
+|ARM Deployment| Deploy an Azure Resource Manager template. When this item is selected, additional fields are displayed to let the author configure which Azure Resource Manager template to open, parameters for the template, etc. [See Azure Resource Manager deployment link settings](#azure-resource-manager-deployment-link-settings). |
+|Create Alert Rule| Creates an Alert rule for a resource. |
+|Custom View| Opens a custom View. When this item is selected, additional fields are displayed to let the author configure the View extension, View name, and any parameters used to open the View. [See custom view](#custom-view-link-settings). |
+|Metrics| Opens a metrics view. |
+|Resource overview| Open the resource's view in the portal based on the resource ID value in the cell. The author can also optionally set a submenu value that will open up a specific menu item in the resource view. |
+|Workbook (template)| Open a workbook template. When this item is selected, additional fields are displayed to let the author configure what template to open, etc. |
## Link settings
When using the link renderer, the following settings are available:
| Setting | Explanation |
|:- |:-|
-| `View to open` | Allows the author to select one of the actions enumerated above. |
-| `Menu item` | If "Resource Overview" is selected, this is the menu item in the resource's overview to open. This can be used to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure `Resourcetype`.|
-| `Link label` | If specified, this value will be displayed in the grid column. If this value is not specified, the value of the cell will be displayed. If you want another value to be displayed, like a heatmap or icon, do not use the `Link` renderer, instead use the appropriate renderer and select the `Make this item a link` option. |
-| `Open link in Context Blade` | If specified, the link will be opened as a popup "context" view on the right side of the window instead of opening as a full view. |
+|View to open| Allows the author to select one of the actions enumerated above. |
+|Menu item| If "Resource Overview" is selected, this is the menu item in the resource's overview to open. This can be used to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure Resource type.|
+|Link label| If specified, this value will be displayed in the grid column. If this value is not specified, the value of the cell will be displayed. If you want another value to be displayed, like a heatmap or icon, do not use the link renderer, instead use the appropriate renderer and select the **Make this item a link** option. |
+|Open link in Context Blade| If specified, the link will be opened as a popup "context" view on the right side of the window instead of opening as a full view. |
-When using the `Make this item a link` option, the following settings are available:
+When using the **Make this item a link** option, the following settings are available:
| Setting | Explanation |
|:- |:-|
-| `Link value comes from` | When displaying a cell as a renderer with a link, this field specifies where the "link" value to be used in the link comes from, allowing the author to select from a dropdown of the other columns in the grid. For example, the cell may be a heatmap value, but you want the link to open up the Resource Overview for the resource ID in the row. In that case, you'd set the link value to come from the `Resource Id` field.
-| `View to open` | same as above. |
-| `Menu item` | same as above. |
-| `Open link in Context Blade` | same as above. |
+|Link value comes from| When displaying a cell as a renderer with a link, this field specifies where the "link" value to be used in the link comes from, allowing the author to select from a dropdown of the other columns in the grid. For example, the cell may be a heatmap value, but you want the link to open up the Resource Overview for the resource ID in the row. In that case, you'd set the link value to come from the **Resource ID** field.
+|View to open| Same as above. |
+|Menu item| Same as above. |
+|Open link in Context Blade| Same as above. |
## Azure Resource Manager deployment link settings
-If the selected link type is `ARM Deployment` the author must specify additional settings to open an Azure Resource Manager deployment. There are two main tabs for configurations.
+If the selected link type is **ARM Deployment**, the author must specify additional settings to open an Azure Resource Manager deployment. There are two main tabs for configurations.
### Template settings
This section defines where the template should come from and the parameters used
| Source | Explanation |
|:- |:-|
-|`Resource group id comes from` | The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value is not specified, the deployment will fail. Select from `Cell`, `Column`, `Static Value`, or `Parameter` in [link sources](#link-sources).|
-|`ARM template URI from` | The URI to the Azure Resource Manager template itself. The template URI needs to be accessible to the users who will deploy the template. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources). For starters, take a look at our [quickstart templates](https://azure.microsoft.com/resources/templates/).|
-|`ARM Template Parameters` | This section defines the template parameters used for the template URI defined above. These parameters will be used to deploy the template on the run page. The grid contains an expand toolbar button to help fill the parameters using the names defined in the template URI and set it to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select Refresh to update the preview with current changes. Parameters are typically values, whereas references are something that could point to keyvault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation** - does not render reference parameters correctly and will show up as null/value, thus users will not be able to correctly deploy reference parameters from Template Viewer tab.|
+|Resource group id comes from| The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value is not specified, the deployment will fail. Select from: Cell, Column, Static Value, or Parameter in [link sources](#link-sources).|
+|ARM template URI from| The URI to the Azure Resource Manager template itself. The template URI needs to be accessible to the users who will deploy the template. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources). For starters, take a look at our [quickstart templates](https://azure.microsoft.com/resources/templates/).|
+|ARM Template Parameters|Defines the template parameters used for the template URI defined above. These parameters will be used to deploy the template on the run page. The grid contains an expand toolbar button to help fill the parameters using the names defined in the template URI and set it to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select Refresh to update the preview with current changes. Parameters are typically values, whereas references are something that could point to key vault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation** - does not render reference parameters correctly and will show up as null/value, thus users will not be able to correctly deploy reference parameters from Template Viewer tab.|
![Screenshot of Azure Resource Manager template settings](./media/workbooks-link-actions/template-settings.png)
This section configures what the users will see before they run the Azure Resour
| Source | Explanation |
|:- |:-|
-|`Title from` | Title used on the run view. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources).|
-|`Description from` | This is the markdown text used to provide a helpful description to users when they want to deploy the template. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources). <br/><br/> **NOTE:** If `Static Value` is selected, a multi-line text box will appear. In this text box, you can resolve parameters using `{paramName}`. Also you can treat columns as parameters by appending `_column` after the column name like `{columnName_column}`. In the example image below, we can reference the column `VMName` by writing `{VMName_column}`. The value after the colon is the [parameter formatter](../visualize/workbooks-parameters.md#parameter-options), in this case it is `value`.|
-|`Run button text from` | Label used on the run (execute) button to deploy the Azure Resource Manager template. This is what users will select to start deploying the Azure Resource Manager template.|
+|Title from| Title used on the run view. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources).|
+|Description from| The markdown text used to provide a helpful description to users when they want to deploy the template. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources). <br/><br/> **NOTE:** If **Static Value** is selected, a multi-line text box will appear. In this text box, you can resolve parameters using "{paramName}". You can also treat columns as parameters by appending "_column" after the column name, like {columnName_column}. In the example image below, we can reference the column "VMName" by writing "{VMName_column}". The value after the colon (for example, in "{VMName_column:value}") is the [parameter formatter](../visualize/workbooks-parameters.md#parameter-options); in this case it is **value**.|
+|Run button text from| Label used on the run (execute) button to deploy the Azure Resource Manager template. This is what users will select to start deploying the Azure Resource Manager template.|
![Screenshot of Azure Resource Manager UX settings](./media/workbooks-link-actions/ux-settings.png)
-After these configurations are set, when the users select the link, it will open up the view with the UX described in the UX settings. If the user selects `Run button text from` it will deploy an Azure Resource Manager template using the values from [template settings](#template-settings). View template will open up the template viewer tab for the user to examine the template and the parameters before deploying.
+After these configurations are set, when the users select the link, it will open up the view with the UX described in the UX settings. If the user selects **Run button text from** it will deploy an Azure Resource Manager template using the values from [template settings](#template-settings). View template will open up the template viewer tab for the user to examine the template and the parameters before deploying.
![Screenshot of run Azure Resource Manager view](./media/workbooks-link-actions/run-tab.png)

## Custom view link settings
-Use this to open Custom Views in the Azure portal. Verify all of the configuration and settings. Incorrect values will cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings via the `Form` or `URL`.
+Use this to open Custom Views in the Azure portal. Verify all of the configuration and settings. Incorrect values will cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings: via the form or via a URL.
> [!NOTE] > Views with a menu cannot be opened in a context tab. If a view with a menu is configured to open in a context tab then no context tab will be shown when the link is selected.
Use this to open Custom Views in the Azure portal. Verify all of the configurati
| Source | Explanation |
|:- |:-|
-|`Extension name` | The name of the extension that hosts the name of the View.|
-|`View name` | The name of the View to open.|
+|Extension name| The name of the extension that hosts the name of the View.|
+|View name| The name of the View to open.|
#### View inputs
-There are two types of inputs, grids and JSON. Use `Grid` for simple key and value tab inputs or select `JSON` to specify a nested JSON input.
+There are two types of inputs, grids and JSON. Use grid for simple key and value tab inputs or select JSON to specify a nested JSON input.
- Grid
- - `Parameter Name`: The name of the View input parameter.
- - `Parameter Comes From`: Where the value of the View parameter should come from. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources).
+ - **Parameter Name**: The name of the View input parameter.
+ - **Parameter Comes From**: Where the value of the View parameter should come from. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources).
> [!NOTE]
- > If `Static Value` is selected, the parameters can be resolved using brackets link `{paramName}` in the text box. Columns can be treated as parameters columns by appending `_column` after the column name like `{columnName_column}`.
+ > If **Static Value** is selected, the parameters can be resolved using brackets link "{paramName}" in the text box. Columns can be treated as parameters columns by appending `_column` after the column name like "{columnName_column}".
- - `Parameter Value`: depending on `Parameter Comes From`, this will be a dropdown of available parameters, columns, or a static value.
+ - **Parameter Value**: depending on `Parameter Comes From`, this will be a dropdown of available parameters, columns, or a static value.
![Screenshot of edit column setting show Custom View settings from form.](./media/workbooks-link-actions/custom-tab-settings.png)

- JSON
If the selected link type is `Workbook (Template)`, the author must specify addi
| Setting | Explanation |
|:- |:-|
-| `Workbook owner Resource Id` | This is the Resource ID of the Azure Resource that "owns" the workbook. Commonly it is an Application Insights resource, or a Log Analytics Workspace. Inside of Azure Monitor, this may also be the literal string `"Azure Monitor"`. When the workbook is Saved, this is what the workbook will be linked to. |
-| `Workbook resources` | An array of Azure Resource Ids that specify the default resource used in the workbook. For example, if the template being opened shows Virtual Machine metrics, the values here would be Virtual Machine resource IDs. Many times, the owner, and resources are set to the same settings. |
-| `Template Id` | Specify the ID of the template to be opened. If this is a community template from the gallery (the most common case), prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If this is a link to a saved workbook/template, it is the full Azure resource ID of that item. |
-| `Workbook Type` | Specify the kind of workbook template to open. The most common cases use the `Default` or `Workbook` option to use the value in the current workbook. |
-| `Gallery Type` | This specifies the gallery type that will be displayed in the "Gallery" view of the template that opens. The most common cases use the `Default` or `Workbook` option to use the value in the current workbook. |
-|`Location comes from` | The location field should be specified if you are opening a specific Workbook resource. If location is not specified, finding the workbook content is much slower. If you know the location, specify it. If you do not know the location or are opening a template that with no specific location, leave this field as "Default".|
-|`Pass specific parameters to template` | Select to pass specific parameters to the template. If selected, only the specified parameters are passed to the template else all the parameters in the current workbook are passed to the template and in that case the parameter *names* must be the same in both workbooks for this parameter value to work.|
-|`Workbook Template Parameters` | This section defines the parameters that are passed to the target template. The name should match with the name of the parameter in the target template. Select value from `Cell`, `Column`, `Parameter`, and `Static Value`. Name and value must not be empty to pass that parameter to the target template.|
+|Workbook owner Resource Id| This is the Resource ID of the Azure Resource that "owns" the workbook. Commonly it is an Application Insights resource, or a Log Analytics Workspace. Inside of Azure Monitor, this may also be the literal string "Azure Monitor". When the workbook is saved, this is what the workbook will be linked to. |
+|Workbook resources| An array of Azure Resource Ids that specify the default resource used in the workbook. For example, if the template being opened shows Virtual Machine metrics, the values here would be Virtual Machine resource IDs. Many times, the owner, and resources are set to the same settings. |
+|Template Id| Specify the ID of the template to be opened. If this is a community template from the gallery (the most common case), prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If this is a link to a saved workbook/template, it is the full Azure resource ID of that item. |
+|Workbook Type| Specify the kind of workbook template to open. The most common cases use the default or workbook option to use the value in the current workbook. |
+|Gallery Type| This specifies the gallery type that will be displayed in the "Gallery" view of the template that opens. The most common cases use the default or workbook option to use the value in the current workbook. |
+|Location comes from| The location field should be specified if you are opening a specific Workbook resource. If location is not specified, finding the workbook content is much slower. If you know the location, specify it. If you do not know the location or are opening a template with no specific location, leave this field as "Default".|
+|Pass specific parameters to template| Select to pass specific parameters to the template. If selected, only the specified parameters are passed to the template. Otherwise, all the parameters in the current workbook are passed to the template, and in that case the parameter *names* must be the same in both workbooks for this parameter value to work.|
+|Workbook Template Parameters| This section defines the parameters that are passed to the target template. The name should match with the name of the parameter in the target template. Select value from: Cell, Column, Parameter, and Static Value. Name and value must not be empty to pass that parameter to the target template.|
For each of the above settings, the author must pick where the value in the linked workbook will come from. See [link sources](#link-sources).
When the workbook link is opened, the new workbook view will be passed all of th
| Source | Explanation |
|:- |:-|
-| `Cell` | This will use the value in that cell in the grid as the link value |
-| `Column` | When selected, another field will be displayed to let the author select another column in the grid. The value of that column for the row will be used in the link value. This is commonly used to enable each row of a grid to open a different template, by setting `Template Id` field to `column`, or to open up the same workbook template for different resources, if the `Workbook resources` field is set to a column that contains an Azure Resource ID |
-| `Parameter` | When selected, another field will be displayed to let the author select a parameter. The value of that parameter will be used for the value when the link is clicked |
-| `Static value` | When selected, another field will be displayed to let the author type in a static value that will be used in the linked workbook. This is commonly used when all of the rows in the grid will use the same value for a field. |
-| `Step` | Use the value set in the current step of the workbook. This is common in query and metrics steps to set the workbook resources in the linked workbook to those used *in the query/metrics step*, not the current workbook |
-| `Workbook` | Use the value set in the current workbook. |
-| `Default` | Use the default value that would be used if no value was specified. This is common for Gallery Type, where the default gallery would be set by the type of the owner resource |
-
-## Next steps
-
+|Cell| This will use the value in that cell in the grid as the link value. |
+|Column| When selected, another field will be displayed to let the author select another column in the grid. The value of that column for the row will be used in the link value. This is commonly used to enable each row of a grid to open a different template, by setting the **Template Id** field to **column**, or to open up the same workbook template for different resources, if the **Workbook resources** field is set to a column that contains an Azure Resource ID. |
+|Parameter| When selected, another field will be displayed to let the author select a parameter. The value of that parameter will be used for the value when the link is clicked |
+|Static value| When selected, another field will be displayed to let the author type in a static value that will be used in the linked workbook. This is commonly used when all of the rows in the grid will use the same value for a field. |
+|Step| Use the value set in the current step of the workbook. This is common in query and metrics steps to set the workbook resources in the linked workbook to those used *in the query/metrics step*, not the current workbook. |
+|Workbook| Use the value set in the current workbook. |
+|Default| Use the default value that would be used if no value was specified. This is common for Gallery Type, where the default gallery would be set by the type of the owner resource. |
azure-monitor Workbooks Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-parameters.md
Last updated 10/23/2019
-# Creating Workbook parameters
+# Workbook parameters
Parameters allow workbook authors to collect input from the consumers and reference it in other parts of the workbook, usually to scope the result set or to set the right visual. It is a key capability that allows authors to build interactive reports and experiences.
format | result
> If the parameter value is not valid json, the result of the format will be an empty value.

## Parameter Style
-The following styles are available to layout the parameters in a parameters step
+The following styles are available for the parameters.
#### Pills

In pills style, the default style, the parameters look like text, and require the user to select them once to go into the edit mode.
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
+
+ Title: Azure Workbook rendering options
+description: Learn about all the Azure Monitor workbook rendering options.
++ Last updated : 06/22/2022+++
+# Rendering options
+These rendering options can be used with grids, tiles, and graphs to produce visualizations in an optimal format.
+## Column renderers
+
+| Column Renderer | Explanation | More Options |
+|:- |:-|:-|
+| Automatic | The default - uses the most appropriate renderer based on the column type. | |
+| Text| Renders the column values as text. | |
+| Right Aligned| Renders the column values as right-aligned text. | |
+| Date/Time| Renders a readable date time string. | |
+| Heatmap| Colors the grid cells based on the value of the cell. | Color palette and min/max value used for scaling. |
+| Bar| Renders a bar next to the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
+| Bar underneath | Renders a bar near the bottom of the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
+| Composite bar| Renders a composite bar using the specified columns in that row. Refer [Composite Bar](workbooks-composite-bar.md) for details. | Columns with corresponding colors to render the bar and a label to display at the top of the bar. |
+|Spark bars| Renders a spark bar in the cell based on the values of a dynamic array in the cell. For example, the Trend column from the screenshot at the top. | Color palette and min/max value used for scaling. |
+|Spark lines| Renders a spark line in the cell based on the values of a dynamic array in the cell. | Color palette and min/max value used for scaling. |
+|Icon| Renders icons based on the text values in the cell. Supported values include:<br><ul><li>canceled</li><li>critical</li><li>disabled</li><li>error</li><li>failed</li> <li>info</li><li>none</li><li>pending</li><li>stopped</li><li>question</li><li>success</li><li>unknown</li><li>warning</li><li>uninitialized</li><li>resource</li><li>up</li> <li>down</li><li>left</li><li>right</li><li>trendup</li><li>trenddown</li><li>4</li><li>3</li><li>2</li><li>1</li><li>Sev0</li><li>Sev1</li><li>Sev2</li><li>Sev3</li><li>Sev4</li><li>Fired</li><li>Resolved</li><li>Available</li><li>Unavailable</li><li>Degraded</li><li>Unknown</li><li>Blank</li></ul>| |
+| Link | Renders a link that, when clicked, performs a configurable action. Use this setting if you **only** want the item to be a link. Any of the other types of renderers can also be a link by using the **Make this item a link** setting. For more information, see [Link Actions](#link-actions). | |
+| Location | Renders a friendly Azure region name based on a region ID. | |
+| Resource type | Renders a friendly resource type string based on a resource type ID. | |
+| Resource| Renders a friendly resource name and link based on a resource ID. | Option to show the resource type icon |
+| Resource group | Renders a friendly resource group name and link based on a resource group ID. If the value of the cell is not a resource group, it will be converted to one. | Option to show the resource group icon |
+|Subscription| Renders a friendly subscription name and link based on a subscription ID. If the value of the cell is not a subscription, it will be converted to one. | Option to show the subscription icon. |
+|Hidden| Hides the column in the grid. Useful when the default query returns more columns than needed but a project-away is not desired. | |
+
+## Link actions
+
+If the **Link** renderer is selected or the **Make this item a link** checkbox is selected, the author can configure a link action that occurs when the user selects the cell. A link action usually takes the user to another view with context coming from the cell, or opens a URL.
+
+## Using thresholds with links
+
+The instructions below will show you how to use thresholds with links to assign icons and open different workbooks. Each link in the grid will open up a different workbook template for that Application Insights resource.
+
+1. Switch the workbook to edit mode by selecting the **Edit** toolbar item.
+1. Select **Add** then **Add query**.
+1. Change the **Data source** to "JSON" and **Visualization** to "Grid".
+1. Enter this query.
+
+ ```json
+ [
+ { "name": "warning", "link": "Community-Workbooks/Performance/Performance Counter Analysis" },
+ { "name": "info", "link": "Community-Workbooks/Performance/Performance Insights" },
+ { "name": "error", "link": "Community-Workbooks/Performance/Apdex" }
+ ]
+ ```
+
+1. Run query.
+1. Select **Column Settings** to open the settings.
+1. Select "name" from **Columns**.
+1. Under **Column renderer**, choose "Thresholds".
+1. Enter and choose the following **Threshold Settings**.
+
+    Keep the default row as is. You may enter whatever text you like. The Text column takes a String format as an input and populates it with the column value and unit, if specified. For example, if the column value is "warning" and the text is "{0} {1} link!", it is displayed as "warning link!".
+
+ | Operator | Value | Icons |
+ |-|||
+ | == | warning | Warning |
+ | == | error | Failed |
+
+ ![Screenshot of Edit column settings tab with the above settings.](./media/workbooks-grid-visualizations/column-settings.png)
+
+1. Select the **Make this item a link** box.
+ - Under **View to open**, choose **Workbook (Template)**.
+ - Under **Link value comes from**, choose **link**.
+ - Select the **Open link in Context Blade** box.
+ - Choose the following settings in **Workbook Link Settings**
+ - Under **Template Id comes from**, choose **Column**.
+ - Under **Column** choose **link**.
+
+ ![Screenshot of link settings with the above settings.](./media/workbooks-grid-visualizations/make-this-item-a-link.png)
+
+1. Select **link** from **Columns**. Under **Settings**, next to **Column renderer**, select **(Hide column)**.
+1. To change the display name of the **name** column, select the **Labels** tab. On the row with **name** as its **Column ID**, under **Column Label** enter the name you want displayed.
+1. Select **Apply**.
+
+ ![Screenshot of a thresholds in grid with the above settings.](./media/workbooks-grid-visualizations/thresholds-workbooks-links.png)
azure-monitor Workbooks Sample Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-sample-links.md
+
+ Title: Sample Azure Workbooks with links
+description: See sample Azure Workbooks.
++++ Last updated : 05/30/2022+++
+# Sample Azure Workbooks with links
+This article includes sample Azure Workbooks.
+
+## Sample workbook with links
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "tabs",
+ "links": [
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Tab 1",
+ "subTarget": "1",
+ "style": "link"
+ },
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Tab 2",
+ "subTarget": "2",
+ "style": "link"
+ }
+ ]
+ },
+ "name": "links - 0"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "# selectedTab is {selectedTab}\r\n(this text step is always visible, but shows the value of the `selectedtab` parameter)"
+ },
+ "name": "always visible text"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## content only visible when `selectedTab == 1`"
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "1"
+ },
+ "name": "selectedTab 1 text"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## content only visible when `selectedTab == 2`"
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "2"
+ },
+ "name": "selectedTab 2"
+ }
+ ],
+ "styleSettings": {},
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
+
+## Sample workbook with toolbar links
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "crossComponentResources": [
+ "value::all"
+ ],
+ "parameters": [
+ {
+ "id": "eddb3313-641d-4429-b467-4793d6ed3575",
+ "version": "KqlParameterItem/1.0",
+ "name": "Subscription",
+ "type": 6,
+ "isRequired": true,
+ "multiSelect": true,
+ "quote": "'",
+ "delimiter": ",",
+ "query": "where type =~ \"microsoft.web/sites\"\r\n| summarize count() by subscriptionId\r\n| order by subscriptionId asc\r\n| project subscriptionId, label=subscriptionId, selected=row_number()==1",
+ "crossComponentResources": [
+ "value::all"
+ ],
+ "value": [],
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ {
+ "id": "8beaeea6-9550-4574-a51e-bb0c16e68e84",
+ "version": "KqlParameterItem/1.0",
+ "name": "site",
+ "type": 5,
+ "description": "global parameter set by selection in the grid",
+ "isRequired": true,
+ "isGlobal": true,
+ "query": "where type =~ \"microsoft.web/sites\"\r\n| project id",
+ "crossComponentResources": [
+ "{Subscription}"
+ ],
+ "isHiddenWhenLocked": true,
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ {
+ "id": "0311fb5a-33ca-48bd-a8f3-d3f57037a741",
+ "version": "KqlParameterItem/1.0",
+ "name": "properties",
+ "type": 1,
+ "isRequired": true,
+ "isGlobal": true,
+ "isHiddenWhenLocked": true
+ }
+ ],
+ "style": "above",
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ "name": "parameters - 0"
+ },
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "toolbar",
+ "links": [
+ {
+ "id": "7ea6a29e-fc83-40cb-a7c8-e57f157b1811",
+ "cellValue": "{site}",
+ "linkTarget": "ArmAction",
+ "linkLabel": "Start",
+ "postText": "Start website {site:name}",
+ "style": "primary",
+ "icon": "Start",
+ "linkIsContextBlade": true,
+ "armActionContext": {
+ "pathSource": "static",
+ "path": "{site}/start",
+ "headers": [],
+ "params": [
+ {
+ "key": "api-version",
+ "value": "2019-08-01"
+ }
+ ],
+ "isLongOperation": false,
+ "httpMethod": "POST",
+ "titleSource": "static",
+ "title": "Start {site:name}",
+ "descriptionSource": "static",
+ "description": "Attempt to start:\n\n{site:grid}",
+ "resultMessage": "Start {site:name} completed",
+ "runLabelSource": "static",
+ "runLabel": "Start"
+ }
+ },
+ {
+ "id": "676a0860-6ec8-4c4f-a3b8-a98af422ae47",
+ "cellValue": "{site}",
+ "linkTarget": "ArmAction",
+ "linkLabel": "Stop",
+ "postText": "Stop website {site:name}",
+ "style": "primary",
+ "icon": "Stop",
+ "linkIsContextBlade": true,
+ "armActionContext": {
+ "pathSource": "static",
+ "path": "{site}/stop",
+ "headers": [],
+ "params": [
+ {
+ "key": "api-version",
+ "value": "2019-08-01"
+ }
+ ],
+ "isLongOperation": false,
+ "httpMethod": "POST",
+ "titleSource": "static",
+ "title": "Stop {site:name}",
+ "descriptionSource": "static",
+ "description": "# Attempt to Stop:\n\n{site:grid}",
+ "resultMessage": "Stop {site:name} completed",
+ "runLabelSource": "static",
+ "runLabel": "Stop"
+ }
+ },
+ {
+ "id": "5e48925f-f84f-4a2d-8e69-6a4deb8a3007",
+ "cellValue": "{properties}",
+ "linkTarget": "CellDetails",
+ "linkLabel": "Properties",
+ "postText": "View the properties for Start website {site:name}",
+ "style": "secondary",
+ "icon": "Properties",
+ "linkIsContextBlade": true
+ }
+ ]
+ },
+ "name": "site toolbar"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "where type =~ \"microsoft.web/sites\"\r\n| extend properties=tostring(properties)\r\n| project-away name, type",
+ "size": 0,
+ "showAnalytics": true,
+ "title": "Web Apps in Subscription {Subscription:name}",
+ "showRefreshButton": true,
+ "exportedParameters": [
+ {
+ "fieldName": "id",
+ "parameterName": "site"
+ },
+ {
+ "fieldName": "",
+ "parameterName": "properties",
+ "parameterType": 1
+ }
+ ],
+ "showExportToExcel": true,
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources",
+ "crossComponentResources": [
+ "{Subscription}"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "properties",
+ "formatter": 5
+ }
+ ],
+ "filter": true,
+ "sortBy": [
+ {
+ "itemKey": "subscriptionId",
+ "sortOrder": 1
+ }
+ ]
+ },
+ "sortBy": [
+ {
+ "itemKey": "subscriptionId",
+ "sortOrder": 1
+ }
+ ]
+ },
+ "name": "query - 1"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## How this workbook works\r\n1. The parameters step declares a `site` resource parameter that is hidden in reading mode, but uses the same query as the grid. this parameter is marked `global`, and has no default selection. The parameters also declares a `properties` hidden text parameter. These parameters are [declared global](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/Parameters/Parameters.md#global-parameters) so they can be set by the grid below, but the toolbar can appear *above* the grid.\r\n\r\n2. The workbook has a links step, that renders as a toolbar. This toolbar has items to start a website, stop a website, and show the Azure Resourcve Graph properties for that website. The toolbar buttons all reference the `site` parameter, which by default has no selection, so the toolbar buttons are disabled. The start and stop buttons use the [ARM Action](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/Links/LinkActions.md#arm-action-settings) feature to run an action against the selected resource.\r\n\r\n3. The workbook has an Azure Resource Graph (ARG) query step, which queries for web sites in the selected subscriptions, and displays them in a grid. When a row is selected, the selected resource is exported to the `site` global parameter, causing start and stop buttons to become enabled. The ARG properties are also exported to the `properties` parameter, causing the poperties button to become enabled.",
+ "style": "info"
+ },
+ "conditionalVisibility": {
+ "parameterName": "debug",
+ "comparison": "isEqualTo",
+ "value": "true"
+ },
+ "name": "text - 3"
+ }
+ ],
+ "fallbackResourceIds": [
+ "Azure Monitor"
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
# Workbook visualizations
-Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capability depends on the data source and result set, but authors can expect them to converge over time. These controls allow authors to present their analysis in rich, interactive reports.
+Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capabilities depends on the data sources and result sets, but authors can expect them to converge over time. These controls allow authors to present their analysis in rich, interactive reports.
Workbooks support these kinds of visual components: * [Text parameters](#text-parameters)
Workbooks support these kinds of visual components:
* [Text visualization](#text-visualizations) > [!NOTE]
-> Each visualization and data source may have its own [Limits](workbooks-limits.md).
+> Each visualization and data source may have its own [limits](workbooks-limits.md).
## Examples
Workbooks support these kinds of visual components:
:::image type="content" source="media/workbooks-visualizations/workbooks-text-visualization-example.png" alt-text="Example screenshot of an Azure workbooks text visualization."::: ## Next steps
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
### General -- [Azure Monitor cost and usage](usage-estimated-costs.md) - Added standard web tests to table<br>Added explanation of billable GB calculation-- [Azure Monitor overview](overview.md) - Updated overview diagram
+| Article | Description |
+|:|:|
+| [Azure Monitor cost and usage](usage-estimated-costs.md) | Added standard web tests to table<br>Added explanation of billable GB calculation |
+| [Azure Monitor overview](overview.md) | Updated overview diagram |
### Agents -- [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md) - Update to latest extension version-- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md) - Added supported resource types-- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md) - Corrected error in data collection rule-- [Overview of the Azure monitoring agents](agents/agents-overview.md) - Added new OS supported for agent-- [Resource Manager template samples for agents](agents/resource-manager-agent.md) - Added Bicep examples-- [Resource Manager template samples for data collection rules](agents/resource-manager-data-collection-rules.md) - Fixed bug in sample parameter file-- [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) - New article-- [Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-linux-vm.md) - New article-- [Troubleshoot the Azure Monitor agent on Windows Arc-enabled server](agents/azure-monitor-agent-troubleshoot-windows-arc.md) - New article-- [Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-windows-vm.md) - New article
+| Article | Description |
+|:|:|
+| [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md) | Update to latest extension version |
+| [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md) | Added supported resource types |
+| [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md) | Corrected error in data collection rule |
+| [Overview of the Azure monitoring agents](agents/agents-overview.md) | Added new OS supported for agent |
+| [Resource Manager template samples for agents](agents/resource-manager-agent.md) | Added Bicep examples |
+| [Resource Manager template samples for data collection rules](agents/resource-manager-data-collection-rules.md) | Fixed bug in sample parameter file |
+| [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) | New article |
+| [Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-linux-vm.md) | New article |
+| [Troubleshoot the Azure Monitor agent on Windows Arc-enabled server](agents/azure-monitor-agent-troubleshoot-windows-arc.md) | New article |
+| [Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-windows-vm.md) | New article |
### Alerts -- [IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) - Added the workflow for ITSM management and removed all references to SCSM.-- [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) - Complete rewrite-- [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) - Bicep samples for alerting have been added to the Resource Manager template samples articles.-- [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) - Added a newly supported resource type
+| Article | Description |
+|:|:|
+| [IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) | Added the workflow for ITSM management and removed all references to SCSM. |
+| [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) | Complete rewrite |
+| [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) | Bicep samples for alerting have been added to the Resource Manager template samples articles. |
+| [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) | Added a newly supported resource type. |
### Application Insights -- [Application Map in Azure Application Insights](app/app-map.md) - Application Maps Intelligent View feature-- [Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md) - telemetry.Flush() guidance is now available.-- [Diagnose with Live Metrics Stream - Azure Application Insights](app/live-stream.md) - Updated information on using unsecure control channel.-- [Migrate an Azure Monitor Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md) - Schema change documentation is now available here.-- [Profile production apps in Azure with Application Insights Profiler](profiler/profiler-overview.md) - Profiler documentation now has a new home in the table of contents.-- All references to unsupported versions of .NET and .NET CORE have been scrubbed from Application Insights product documentation. See [.NET and >NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+| Article | Description |
+|:|:|
+| [Application Map in Azure Application Insights](app/app-map.md) | Application Maps Intelligent View feature |
+| [Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md) | telemetry.Flush() guidance is now available. |
+| [Diagnose with Live Metrics Stream](app/live-stream.md) | Updated information on using unsecure control channel. |
+| [Migrate an Azure Monitor Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md) | Schema change documentation is now available here. |
+| [Profile production apps in Azure with Application Insights Profiler](profiler/profiler-overview.md) | Profiler documentation now has a new home in the table of contents. |
+
+All references to unsupported versions of .NET and .NET Core have been scrubbed from Application Insights product documentation. See [.NET and .NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
### Change Analysis -- [Navigate to a change using custom filters in Change Analysis](change/change-analysis-custom-filters.md) - New article-- [Pin and share a Change Analysis query to the Azure dashboard](change/change-analysis-query.md) - New article-- [Use Change Analysis in Azure Monitor to find web-app issues](change/change-analysis.md) - Added details enabling for web app in-guest changes
+| Article | Description |
+|:|:|
+| [Navigate to a change using custom filters in Change Analysis](change/change-analysis-custom-filters.md) | New article |
+| [Pin and share a Change Analysis query to the Azure dashboard](change/change-analysis-query.md) | New article |
+| [Use Change Analysis in Azure Monitor to find web-app issues](change/change-analysis.md) | Added details enabling for web app in-guest changes |
### Containers -- [Configure ContainerLogv2 schema (preview) for Container Insights](containers/container-insights-logging-v2.md) - New article describing new schema for container logs-- [Enable Container insights](containers/container-insights-onboard.md) - General rewrite to improve clarity-- [Resource Manager template samples for Container insights](containers/resource-manager-container-insights.md) - Added Bicep examples
+| Article | Description |
+|:|:|
+| [Configure ContainerLogv2 schema (preview) for Container Insights](containers/container-insights-logging-v2.md) | New article describing new schema for container logs |
+| [Enable Container insights](containers/container-insights-onboard.md) | General rewrite to improve clarity |
+| [Resource Manager template samples for Container insights](containers/resource-manager-container-insights.md) | Added Bicep examples |
### Insights
-- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md) - Added known issue for OS computer name.
+| Article | Description |
+|:---|:---|
+| [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md) | Added known issue for OS computer name. |
### Logs
-- [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - Update limitations and constraint.
-- [Design a Log Analytics workspace architecture](logs/workspace-design.md) - Complete rewrite to better describe decision criteria and include Sentinel considerations
-- [Manage access to Log Analytics workspaces](logs/manage-access.md) - Consolidated and rewrote all content on configuring workspace access
-- [Restore logs in Azure Monitor (Preview)](logs/restore.md) - Documented new Log Analytics table management configuration UI, which lets you configure a table's log plan and archive and retention policies.
+| Article | Description |
+|:---|:---|
+| [Azure Monitor customer-managed key](logs/customer-managed-keys.md) | Update limitations and constraint. |
+| [Design a Log Analytics workspace architecture](logs/workspace-design.md) | Complete rewrite to better describe decision criteria and include Sentinel considerations |
+| [Manage access to Log Analytics workspaces](logs/manage-access.md) | Consolidated and rewrote all content on configuring workspace access |
+| [Restore logs in Azure Monitor (Preview)](logs/restore.md) | Documented new Log Analytics table management configuration UI, which lets you configure a table's log plan and archive and retention policies. |
### Virtual Machines
-- [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) - New article describing process to replace VM guest health with alert rules
-- [VM insights guest health (preview)](vm/vminsights-health-overview.md) - Added deprecation statement
+| Article | Description |
+|:---|:---|
+| [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) | New article describing process to replace VM guest health with alert rules |
+| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement |
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
Once these downloads are completed, then follow the steps in this guide to insta
### Verifying the download The installer has an associated PGP signature file with an `.asc` filename extension. This file can be used to ensure the installer downloaded is a verified
-Microsoft provided file. The Microsoft PGP Public Key used for signing Linux packages is available here (<https://packages.microsoft.com/keys/microsoft.asc>)
-and has been used to sign the signature file.
+Microsoft provided file. The [Microsoft PGP Public Key used for signing Linux packages](https://packages.microsoft.com/keys/microsoft.asc) has been used to sign the signature file.
The Microsoft PGP Public Key can be imported to a user's local as follows:
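A minimal sketch of that import-and-verify flow is shown below, assuming `gpg` is available and the installer and its `.asc` signature file are in the current directory (the file names are illustrative, not the article's exact names):

```bash
# Download the Microsoft PGP public key and import it into the local keyring.
wget https://packages.microsoft.com/keys/microsoft.asc
gpg --import microsoft.asc

# Verify the downloaded installer against its detached .asc signature.
# Replace the file names with the names of your actual downloads.
gpg --verify azacsnap_installer.run.asc azacsnap_installer.run
```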
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-tips.md
Explanation of the above crontab.
config files are. - `azacsnap -c .....` = the full azacsnap command to run, including all the options.
-Further explanation of cron and the format of the crontab file here: <https://en.wikipedia.org/wiki/Cron>
+For more information about cron and the format of the crontab file, see [cron](https://wikipedia.org/wiki/Cron).
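As a hedged illustration of that format, a crontab entry that takes an hourly snapshot might look like the following; the schedule, paths, and azacsnap options shown are assumptions for the example, not the article's exact values:

```bash
# Open the crontab of the user that owns the azacsnap configuration files.
crontab -e

# Example entry (minute hour day-of-month month day-of-week command):
# run an azacsnap backup at five minutes past every hour.
5 * * * * (cd /home/azacsnap/bin && ./azacsnap -c backup --volume data --prefix hana_hourly --retention 9)
```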
> [!NOTE] > Users are responsible for monitoring the cron jobs to ensure snapshots are being
A storage volume snapshot can be restored to a new volume (`-c restore --restore
A snapshot can be copied back to the SAP HANA data area, but SAP HANA must not be running when a copy is made (`cp /hana/data/H80/mnt00001/.snapshot/hana_hourly.2020-06-17T113043.1586971Z/*`).
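For illustration, a hedged version of that copy with an explicit destination is shown below; the paths and snapshot name come from the example above, and SAP HANA must be stopped before the copy runs:

```bash
# Copy the files from the chosen snapshot back into the SAP HANA data area.
# Only run this while SAP HANA is shut down.
cp -av /hana/data/H80/mnt00001/.snapshot/hana_hourly.2020-06-17T113043.1586971Z/* /hana/data/H80/mnt00001/
```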
-For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request from Azure portal: <https://portal.azure.com>
+For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request via the [Azure portal](https://portal.azure.com).
If you decide to perform the disaster recovery failover, the `azacsnap -c restore --restore revertvolume` command at the DR site will automatically make available the most recent (`/hana/data` and `/hana/logbackups`) volume snapshots to allow for an SAP HANA recovery. Use this command with caution as it breaks replication between production and DR sites.
A 'boot' snapshot can be recovered as follows:
1. The customer will need to shut down the server. 1. After the Server is shut down, the customer will need to open a service request that contains the Machine ID and Snapshot to restore.
- > Customers can open a service request from the Azure portal: <https://portal.azure.com>
+ > Customers can open a service request via the [Azure portal](https://portal.azure.com).
1. Microsoft will restore the Operating System LUN using the specified Machine ID and Snapshot, and then boot the Server. 1. The customer will then need to confirm Server is booted and healthy.
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
For information about monitoring a volume's capacity, see [Monitor the capacit
## Resize the capacity pool using the Azure portal
-You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than 4 TiB. Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
+You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than the sum of the capacity of the volumes hosted in the pool, with a minimum of 4 TiB. Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
1. From the NetApp Account view, go to **Capacity pools**, and click the capacity pool that you want to resize. 2. Right-click the capacity pool name or click the "…" icon at the end of the capacity pool row to display the context menu. Click **Resize**.
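If you prefer the command line to the portal steps above, the pool size can also be changed with the Azure CLI. This is a hedged sketch: the resource names are placeholders, `--size` is the new pool size in TiB, and the parameter names should be confirmed against the current `az netappfiles pool update` reference:

```bash
# Resize an existing capacity pool to 8 TiB.
az netappfiles pool update \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --pool-name myCapacityPool \
    --size 8
```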
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
To set a new value for read-ahead, run the following command:
### Example
-```
+```bash
#!/bin/bash
# set | show readahead for a specific mount point
# Useful for things like NFS and if you do not know / care about the backing device
To set a new value for read-ahead, run the following command:
# To the extent possible under law, Red Hat, Inc. has dedicated all copyright
# to this software to the public domain worldwide, pursuant to the
# CC0 Public Domain Dedication. This software is distributed without any warranty.
-# See <http://creativecommons.org/publicdomain/zero/1.0/>.
-#
+# For more information, see the [CC0 1.0 Public Domain Dedication](http://creativecommons.org/publicdomain/zero/1.0/).
E_BADARGS=22
function myusage() {
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases - **Latest service release**
-May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](<https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip>)
+May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)
- **Latest major update or known stable version** Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
|Release|Download Links|Note|
|---|---|:---:|
-|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](<https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip>)||
-|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](<https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip>)||
-|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](<https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip>)||
-|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)||
-|November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](<https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip>)||
+|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)||
+|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip)||
+|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip)||
+|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip)||
+|November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip)||
|September Service Release (2109)|[Azure-Percept-DK-1.0.20210929.1747-public_preview_1.0.zip](https://go.microsoft.com/fwlink/?linkid=2174462)||
|July Service Release (2107)|[Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip](https://download.microsoft.com/download/f/a/9/fa95d9d9-a739-493c-8fad-bccf839072c9/Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip)||
|June Service Release (2106)|[Azure-Percept-DK-1.0.20210611.0952-public_preview_1.0.zip](https://download.microsoft.com/download/1/5/8/1588f7e3-f8ae-4c06-baa2-c559364daae5/Azure-Percept-DK-1.0.20210611.0952-public_preview_1.0.zip)||
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Publish service catalog managed app description: Shows how to create an Azure managed application that is intended for members of your organization.-+ Previously updated : 08/16/2021- Last updated : 06/23/2022+ # Quickstart: Create and publish a managed application definition
This quickstart provides an introduction to working with [Azure Managed Applicat
To publish a managed application to your service catalog, you must:
-* Create a template that defines the resources to deploy with the managed application.
-* Define the user interface elements for the portal when deploying the managed application.
-* Create a _.zip_ package that contains the required template files.
-* Decide which user, group, or application needs access to the resource group in the user's subscription.
-* Create the managed application definition that points to the _.zip_ package and requests access for the identity.
+- Create a template that defines the resources to deploy with the managed application.
+- Define the user interface elements for the portal when deploying the managed application.
+- Create a _.zip_ package that contains the required template files.
+- Decide which user, group, or application needs access to the resource group in the user's subscription.
+- Create the managed application definition that points to the _.zip_ package and requests access for the identity.
## Create the ARM template
Add the following JSON to your file. It defines the parameters for creating a st
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
+ "apiVersion": "2021-09-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": {
Add the following JSON to your file. It defines the parameters for creating a st
"outputs": { "storageEndpoint": { "type": "string",
- "value": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2019-06-01').primaryEndpoints.blob]"
+ "value": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2021-09-01').primaryEndpoints.blob]"
} } }
$groupID=(Get-AzADGroup -DisplayName mygroup).Id
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive
-groupid=$(az ad group show --group mygroup --query objectId --output tsv)
+groupid=$(az ad group show --group mygroup --query id --output tsv)
```
Next, you need the role definition ID of the Azure built-in role you want to gra
# [PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-$ownerID=(Get-AzRoleDefinition -Name Owner).Id
+$roleid=(Get-AzRoleDefinition -Name Owner).Id
``` # [Azure CLI](#tab/azure-cli) ```azurecli-interactive
-ownerid=$(az role definition list --name Owner --query [].name --output tsv)
+roleid=$(az role definition list --name Owner --query [].name --output tsv)
```
New-AzManagedApplicationDefinition `
-LockLevel ReadOnly ` -DisplayName "Managed Storage Account" ` -Description "Managed Azure Storage Account" `
- -Authorization "${groupID}:$ownerID" `
+ -Authorization "${groupID}:$roleid" `
-PackageFileUri $blob.ICloudBlob.StorageUri.PrimaryUri.AbsoluteUri ```
az managedapp definition create \
--lock-level ReadOnly \ --display-name "Managed Storage Account" \ --description "Managed Azure Storage Account" \
- --authorizations "$groupid:$ownerid" \
+ --authorizations "$groupid:$roleid" \
--package-file-uri "$blob" ```
When the command completes, you have a managed application definition in your re
Some of the parameters used in the preceding example are:
-* **resource group**: The name of the resource group where the managed application definition is created.
-* **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, ReadOnly is the only supported lock level. When ReadOnly is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
-* **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group. It's specified in the format of `<principalId>:<roleDefinitionId>`. If more than one value is needed, specify them in the form `<principalId1>:<roleDefinitionId1>,<principalId2>:<roleDefinitionId2>`. The values are separated by a comma.
-* **package file URI**: The location of a _.zip_ package that contains the required files.
+- **resource group**: The name of the resource group where the managed application definition is created.
+- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, ReadOnly is the only supported lock level. When ReadOnly is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
+- **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group.
+
+ - **Azure PowerShell**: `"${groupid}:$roleid"` or you can use curly braces for each variable `"${groupid}:${roleid}"`. Use a comma to separate multiple values: `"${groupid1}:$roleid1", "${groupid2}:$roleid2"`.
+ - **Azure CLI**: `"$groupid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$groupid1:$roleid1" "$groupid2:$roleid2"`.
+
+- **package file URI**: The location of a _.zip_ package that contains the required files.
## Bring your own storage for the managed application definition
-You can choose to store your managed application definition within a storage account provided by you during creation so that its location and access can be fully managed by you for your regulatory needs.
+As an alternative, you can choose to store your managed application definition within a storage account provided by you during creation so that its location and access can be fully managed by you for your regulatory needs.
> [!NOTE] > Bring your own storage is only supported with ARM template or REST API deployments of the managed application definition.
Use the following ARM template to deploy your packaged managed application as a
} ```
-We have added a new property named `storageAccountId` to your `applicationDefinitions` properties and provide storage account ID you wish to store your definition in as its value:
-
-You can verify that the application definition files are saved in your provided storage account in a container titled `applicationDefinitions`.
+The `applicationDefinitions` properties include `storageAccountId` that contains the storage account ID for your storage account. You can verify that the application definition files are saved in your provided storage account in a container titled `applicationDefinitions`.
> [!NOTE]
-> For added security, you can create a managed applications definition store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in Service Catalog.
+> For added security, you can create a managed applications definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in Service Catalog.
## Make sure users can see your definition
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
read -p "Press [ENTER] to continue: "
For a step-by-step tutorial that guides you through the process of creating an ARM template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
azure-signalr Signalr Quickstart Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
+Sign in to the [Azure portal](https://portal.azure.com) using your Azure account.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsapi).
backup Backup Azure Recovery Services Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-recovery-services-vault-overview.md
Title: Overview of Recovery Services vaults
description: An overview of Recovery Services vaults. Last updated 08/17/2020
+m
++ # Recovery Services vaults overview
-This article describes the features of a Recovery Services vault. A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data, while minimizing management overhead. Recovery Services vaults are based on the Azure Resource Manager model of Azure, which provides features such as:
+This article describes the features of a Recovery Services vault. A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and SQL Server in Azure VMs. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data, while minimizing management overhead. Recovery Services vaults are based on the Azure Resource Manager model of Azure, which provides features such as:
- **Enhanced capabilities to help secure backup data**: With Recovery Services vaults, Azure Backup provides security capabilities to protect cloud backups. The security features ensure you can secure your backups, and safely recover data, even if production and backup servers are compromised. [Learn more](backup-azure-security-feature.md)
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
The Recovery Services vault is a Resource Manager resource, so you must place it
3. Specify the type of redundancy to use for the vault storage. * You can use [locally redundant storage](../storage/common/storage-redundancy.md#locally-redundant-storage), [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) or [zone-redundant storage](../storage/common/storage-redundancy.md#zone-redundant-storage) .
- * The following example sets the **-BackupStorageRedundancy** option for the[Set-AzRecoveryServicesBackupProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupproperty) cmd for **testvault** set to **GeoRedundant**.
+ * The following example sets the **-BackupStorageRedundancy** option for the [Set-AzRecoveryServicesBackupProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupproperty) cmd for **testvault** set to **GeoRedundant**.
```powershell
$vault1 = Get-AzRecoveryServicesVault -Name "testvault"
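# A hedged completion of this example: apply GeoRedundant storage redundancy to the
# vault retrieved above, as described in the preceding text.
Set-AzRecoveryServicesBackupProperty -Vault $vault1 -BackupStorageRedundancy GeoRedundant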
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
To create a Backup vault, follow these steps.
### Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>.
+Sign in to the [Azure portal](https://portal.azure.com).
### Create Backup vault
The vault move across subscriptions and resource groups is supported in all publ
:::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the Backup vault validation status." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-expanded.png":::
-1. Select the checkbox _I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs_ΓÇÖ to confirm, and then select **Move**.
+1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**.
>[!Note] >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes.
Wait till the move operation is complete to perform any other operations on the
:::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the validation status of Backup vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-expanded.png":::
-1. Select the checkbox _I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs_ to confirm, and then select **Move**.
+1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**.
>[!Note] >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes.
backup Manage Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-file-share-rest-api.md
For example: To change the protection policy of *testshare* from *schedule1* to
## Stop protection but retain existing data
-You can remove protection on a protected file share but retain the data already backed up. To do so, remove the policy in the request body you used to[enable backup](backup-azure-file-share-rest-api.md#enable-backup-for-the-file-share) and submit the request. Once the association with the policy is removed, backups are no longer triggered, and no new recovery points are created.
+You can remove protection on a protected file share but retain the data already backed up. To do so, remove the policy in the request body you used to [enable backup](backup-azure-file-share-rest-api.md#enable-backup-for-the-file-share) and submit the request. Once the association with the policy is removed, backups are no longer triggered, and no new recovery points are created.
```json {
backup Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/transport-layer-security.md
If the machine is running earlier versions of Windows, the corresponding updates
|Operating system |KB article | |||
-|Windows Server 2008 SP2 | <https://support.microsoft.com/help/4019276> |
-|Windows Server 2008 R2, Windows 7, Windows Server 2012 | <https://support.microsoft.com/help/3140245> |
+|Windows Server 2008 SP2 | <https://support.microsoft.com/help/4019276> |
+|Windows Server 2008 R2, Windows 7, Windows Server 2012 | <https://support.microsoft.com/help/3140245> |
>[!NOTE] >The update will install the required protocol components. After installation, you must make the registry key changes mentioned in the KB articles above to properly enable the required protocols.
backup Tutorial Backup Windows Server To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-windows-server-to-azure.md
You can use Azure Backup to protect your Windows Server from corruptions, attack
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a Recovery Services vault
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
az batch account set \
## Next steps - Learn more about [security best practices in Azure Batch](security-best-practices.md).-- Learn more about[Azure Key Vault](../key-vault/general/basic-concepts.md).
+- Learn more about [Azure Key Vault](../key-vault/general/basic-concepts.md).
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
To run CUDA applications on a pool of Windows NC nodes, you need to install NVDI
To run CUDA applications on a pool of Linux NC nodes, you need to install necessary NVIDIA Tesla GPU drivers from the CUDA Toolkit. The following sample steps create and deploy a custom Ubuntu 16.04 LTS image with the GPU drivers:
-1. Deploy an Azure NC-series VM running Ubuntu 16.04 LTS. For example, create the VM in the US South Central region.
-2. Add the [NVIDIA GPU Drivers extension](../virtual-machines/extensions/hpccompute-gpu-linux.md
-) to the VM by using the Azure portal, a client computer that connects to the Azure subscription, or Azure Cloud Shell. Alternatively, follow the steps to connect to the VM and [install CUDA drivers](../virtual-machines/linux/n-series-driver-setup.md) manually.
+1. Deploy an Azure NC-series VM running Ubuntu 16.04 LTS. For example, create the VM in the US South Central region.
+2. Add the [NVIDIA GPU Drivers extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) to the VM by using the Azure portal, a client computer that connects to the Azure subscription, or Azure Cloud Shell. Alternatively, follow the steps to connect to the VM and [install CUDA drivers](../virtual-machines/linux/n-series-driver-setup.md) manually.
3. Follow the steps to create an [Azure Compute Gallery image](batch-sig-images.md) for Batch. 4. Create a Batch account in a region that supports NC VMs. 5. Using the Batch APIs or Azure portal, create a pool [using the custom image](batch-sig-images.md) and with the desired number of nodes and scale. The following table shows sample pool settings for the image:
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Azure emulator:
> > Reinstall AzureAuthoringTools v 2.7.1 and AzureComputeEmulator v 2.7 - make sure that version matches.
-2. Open a browser and navigate to **http://127.0.0.1**.
+2. Open a browser and navigate to `http://127.0.0.1`.
3. When the browser window opens, enter a nickname and then hit enter. This will allow you to post messages as a specific nickname. To test multi-user functionality, open additional browser windows using the
cognitive-services Get Suggested Search Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md
Familiarize yourself with the [Bing Autosuggest API v7](/rest/api/cognitiveservi
Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
-Learn how to search the web by using the [Bing Web Search API](../bing-web-search/overview.md), and explore the other[Bing Search APIs](../bing-web-search/index.yml).
+Learn how to search the web by using the [Bing Web Search API](../bing-web-search/overview.md), and explore the other [Bing Search APIs](../bing-web-search/index.yml).
Be sure to read [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) so you don't break any of the rules about using the search results.
cognitive-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/samples-dotnet.md
ms.devlang: csharp
The following list includes links to the code samples built using the Azure Content Moderator SDK for .NET. - **Image moderation**: [Evaluate an image for adult and racy content, text, and faces](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).-- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the[.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
> [!NOTE] > There is a maximum limit of **5 image lists** with each list to **not exceed 10,000 images**. > -- **Text moderation**: [Screen text for profanity and personal data](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TextModeration/Program.cs). See the[.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+- **Text moderation**: [Screen text for profanity and personal data](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TextModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
- **Custom terms**: [Moderate with custom term lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TermListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp). > [!NOTE]
cognitive-services Export Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-programmatically.md
export = trainer.export_iteration(project_id, iteration_id, platform, flavor, ra
For more information, see the **[export_iteration](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.operations.customvisiontrainingclientoperationsmixin#export-iteration-project-id--iteration-id--platform--flavor-none--custom-headers-none--raw-false-operation-config-)** method.
+> [!IMPORTANT]
+> If you've already exported a particular iteration, you cannot call the **export_iteration** method again. Instead, skip ahead to the **get_exports** method call to get a link to your existing exported model.
+ ## Download the exported model Next, you'll call the **get_exports** method to check the status of the export operation. The operation runs asynchronously, so you should poll this method until the operation completes. When it completes, you can retrieve the URI where you can download the model iteration to your device.
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
The training process should only take a few minutes. During this time, informati
## Evaluate the detector
-After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision, using a process called [k-fold cross validation](https://wikipedia.org/wiki/Cross-validation_(statistics)). Precision and recall are two different measurements of the effectiveness of a detector:
+After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision. Precision and recall are two different measurements of the effectiveness of a detector:
- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%. - **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
The training process should only take a few minutes. During this time, informati
## Evaluate the classifier
-After training has completed, the model's performance is estimated and displayed. The Custom Vision Service uses the images that you submitted for training to calculate precision and recall, using a process called [k-fold cross validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)). Precision and recall are two different measurements of the effectiveness of a classifier:
+After training has completed, the model's performance is estimated and displayed. The Custom Vision Service uses the images that you submitted for training to calculate precision and recall. Precision and recall are two different measurements of the effectiveness of a classifier:
- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%. - **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
cognitive-services Luis Concept Devops Sourcecontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-devops-sourcecontrol.md
To save a LUIS app in `.lu` format and place it under source control:
### Build the LUIS app from source
-For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version) , to [train the version](./luis-how-to-train.md) and to[publish it](./luis-how-to-publish-app.md). You can do this in the LUIS portal, or at the command line:
+For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version) , to [train the version](./luis-how-to-train.md) and to [publish it](./luis-how-to-publish-app.md). You can do this in the LUIS portal, or at the command line:
- Use the LUIS portal to [import the `.lu` version](./luis-how-to-manage-versions.md#import-version) of the app from source control, and [train](./luis-how-to-train.md) and [publish](./luis-how-to-publish-app.md) the app.
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
All Cognitive Services containers require three primary elements:
* **EULA accept setting**. An end-user license agreement (EULA) set with a value of `Eula=accept`.
-* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource _Keys and Endpoint_ page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
+* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource **Keys and Endpoint** page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
> [!IMPORTANT] >
There are several ways to validate that the container is running:
#### English &leftrightarrow; German
-Navigate to the swagger page: `<http://localhost:5000/swagger/index.html>`
+Navigate to the swagger page: `http://localhost:5000/swagger/index.html`
1. Select **POST /translate** 1. Select **Try it out**
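If you'd rather validate from the command line than through the swagger UI, a hedged `curl` call against the same local endpoint looks like the following; the port and route mirror the swagger address above, and the request shape follows the Translator v3 API, so confirm the details against the article if your container is configured differently:

```bash
# Ask the local container to translate a short English string to German.
curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=de" \
    -H "Content-Type: application/json" \
    -d '[{"Text":"Hello, what is your name?"}]'
```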
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md
files.
## I tried uploading my TMX, but it says "document processing failed" -
-Ensure that the TMX conforms to the TMX 1.4b Specification at
-<https://www.gala-global.org/tmx-14b>.
+Ensure that the TMX conforms to the [TMX 1.4b Specification](https://www.gala-global.org/tmx-14b).
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 02/02/2022 Last updated : 06/23/2022 recommendations: false ms.devlang: csharp, golang, java, javascript, python
The following headers are included with each Document Translator API request:
* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches` * The POST request body is a JSON object named `inputs`.
-* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs
-* The `prefix` and `suffix` fields (optional) are used to filter documents in the container including folders.
+* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs.
+* The `prefix` and `suffix` fields are case-sensitive strings used to filter documents in the source path for translation. The `prefix` field is often used to delineate subfolders for translation. The `suffix` field is most often used for file extensions. A hedged example request that uses the `prefix` filter is sketched after this list.
* A value for the `glossaries` field (optional) is applied when the document is being translated. * The `targetUrl` for each target language must be unique.
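As a hedged illustration of the `prefix` filter described in the list above, the request below submits a batch job that only translates blobs whose names start with a given folder prefix. The container URLs, key, and folder name are placeholders, and `suffix` follows the same pattern inside the `filter` object:

```bash
# Submit a batch translation job limited to blobs under the "Demo_1/" prefix.
curl -X POST "https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches" \
    -H "Ocp-Apim-Subscription-Key: <YOUR-KEY>" \
    -H "Content-Type: application/json" \
    -d '{
          "inputs": [
            {
              "source": {
                "sourceUrl": "https://myblob.blob.core.windows.net/source",
                "filter": { "prefix": "Demo_1/" }
              },
              "targets": [
                {
                  "targetUrl": "https://myblob.blob.core.windows.net/target",
                  "language": "es"
                }
              ]
            }
          ]
        }'
```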
The following headers are included with each Document Translator API request:
{ "source": { "sourceUrl": "https://myblob.blob.core.windows.net/source",
- "filter": {
- "prefix": "myfolder/"
- }
- },
+ },
"targets": [ { "targetUrl": "https://myblob.blob.core.windows.net/target",
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
private static readonly string key = "<YOUR-KEY>";
- static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\",\"filter\":{\"prefix\": \"Demo_1/\"} }, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
+ static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\"}, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
static async Task Main(string[] args) {
let data = JSON.stringify({"inputs": [
"source": { "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS", "storageSource": "AzureBlob",
- "language": "en",
- "filter":{
- "prefix": "Demo_1/"
+ "language": "en"
} }, "targets": [
payload= {
"source": { "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS", "storageSource": "AzureBlob",
- "language": "en",
- "filter":{
- "prefix": "Demo_1/"
+ "language": "en"
} }, "targets": [
public class DocumentTranslation {
public void post() throws IOException { MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"filter\": {\n \"prefix\": \"Demo_1\"\n },\n \"language\": \"en\",\n \"storageSource\": \"AzureBlob\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n\"language\": \"fr\",\n\"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
+ RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"language\": \"en\",\n \"storageSource\": \"AzureBlob\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n\"language\": \"fr\",\n\"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
Request request = new Request.Builder() .url(path).post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
key := "<YOUR-KEY>"
uri := endpoint + "/batches" method := "POST"
-var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en","filter":{"prefix":"Demo_1/"}},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
+var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en"},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
req, err := http.NewRequest(method, endpoint, bytes.NewBuffer(jsonStr)) req.Header.Add("Ocp-Apim-Subscription-Key", key)
cognitive-services Previous Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/previous-updates.md
+
+ Title: Previous language service updates
+
+description: An archive of previous Azure Cognitive Service for Language updates.
++++++ Last updated : 06/23/2022++++
+# Previous updates for Azure Cognitive Service for Language
+
+This article contains a list of previously recorded updates for Azure Cognitive Service for Language. For more current service updates, see [What's new](../whats-new.md).
+
+## October 2021
+
+* Quality improvements for the extractive summarization feature in model-version `2021-08-01`.
+
+## September 2021
+
+* Starting with version `3.0.017010001-onprem-amd64`, the Text Analytics for health container can now be called using the client library.
+
+## July 2021
+
+* General availability for text analytics for health containers and API.
+* General availability for opinion mining.
+* General availability for PII extraction and redaction.
+* General availability for asynchronous operation.
+
+## June 2021
+
+### General API updates
+
+* New model-version `2021-06-01` for key phrase extraction based on transformers. It provides:
+ * Support for 10 languages (Latin and CJK).
+ * Improved key phrase extraction.
+* The `2021-06-01` model version for Named Entity Recognition (NER), which provides:
+ * Improved AI quality and expanded language support for the *Skill* entity category.
+ * Added Spanish, French, German, Italian, and Portuguese language support for the *Skill* entity category.
+
+### Text Analytics for health updates
+
+* A new model version `2021-05-15` for the `/health` endpoint and on-premises container, which provides:
+ * 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION`, and `MUTATION_TYPE`.
+ * 14 new relation types.
+ * Assertion detection expanded for new entity types.
+ * Linking support for the `ALLERGEN` entity type.
+* A new image for the Text Analytics for health container with tag `3.0.016230002-onprem-amd64` and model version `2021-05-15`. This container is available for download from Microsoft Container Registry.
+
+## May 2021
+
+* Custom question answering (previously QnA maker) can now be accessed using a Text Analytics resource.
+* Preview API release, including:
+ * Asynchronous API now supports sentiment analysis and opinion mining.
+ * A new query parameter, `LoggingOptOut`, is now available for customers who wish to opt out of logging input text for incident reports.
+* Text analytics for health and asynchronous operations are now available in all regions.
+
+## March 2021
+
+* Changes in the opinion mining JSON response body:
+ * `aspects` is now `targets` and `opinions` is now `assessments`.
+* Changes in the JSON response body of the hosted web API of text analytics for health:
+ * The `isNegated` boolean name of a detected entity object for negation is deprecated and replaced by assertion detection.
+ * A new property called `role` is now part of the extracted relation between an attribute and an entity as well as the relation between entities. This adds specificity to the detected relation type.
+* Entity linking is now available as an asynchronous task.
+* A new `pii-categories` parameter for the PII feature.
+ * This parameter lets you specify select PII entities, as well as those not supported by default for the input language.
+* Updated client libraries, which include asynchronous and text analytics for health operations.
+
+* A new model version `2021-03-01` for the text analytics for health API and on-premises container, which provides:
+ * A rename of the `Gene` entity type to `GeneOrProtein`.
+ * A new `Date` entity type.
+ * Assertion detection which replaces negation detection.
+ * A new preferred `name` property for linked entities that is normalized from various ontologies and coding systems.
+* A new text analytics for health container image with tag `3.0.015490002-onprem-amd64` and the new model-version `2021-03-01` has been released to the container preview repository.
+ * This container image will no longer be available for download from `containerpreview.azurecr.io` after April 26th, 2021.
+* **Processed Text Records** is now available as a metric in the **Monitoring** section for your text analytics resource in the Azure portal.
+
+## February 2021
+
+* The `2021-01-15` model version for the PII feature, which provides:
+ * Expanded support for 9 new languages
+ * Improved AI quality
+* The S0 through S4 pricing tiers are being retired on March 8th, 2021.
+* The language detection container is now generally available.
+
+## January 2021
+
+* The `2021-01-15` model version for Named Entity Recognition (NER), which provides:
+ * Expanded language support.
+ * Improved AI quality of general entity categories for all supported languages.
+* The `2021-01-05` model version for language detection, which provides additional language support.
+
+## November 2020
+
+* Portuguese (Brazil) `pt-BR` is now supported in sentiment analysis, starting with model version `2020-04-01`. It adds to the existing `pt-PT` support for Portuguese.
+* Updated client libraries, which include asynchronous and text analytics for health operations.
+
+## October 2020
+
+* Hindi support for sentiment analysis, starting with model version `2020-04-01`.
+* Model version `2020-09-01` for language detection, which adds additional language support and accuracy improvements.
+
+## September 2020
+
+* PII now includes the new `redactedText` property in the response JSON where detected PII entities in the input text are replaced by an `*` for each character of those entities.
+* Entity linking endpoint now includes the `bingID` property in the response JSON for linked entities.
+* The following updates are specific to the September release of the text analytics for health container only.
+ * A new container image with tag `1.1.013530001-amd64-preview` with the new model-version `2020-09-03` has been released to the container preview repository.
+ * This model version provides improvements in entity recognition, abbreviation detection, and latency enhancements.
+
+## August 2020
+
+* Model version `2020-07-01` for key phrase extraction, PII detection, and language detection. This update adds:
+ * Additional government and country specific entity categories for Named Entity Recognition.
+ * Norwegian and Turkish support in Sentiment Analysis.
+* An HTTP 400 error will now be returned for API requests that exceed the published data limits.
+* Endpoints that return an offset now support the optional `stringIndexType` parameter, which adjusts the returned `offset` and `length` values to match a supported string index scheme.
+
+The following updates are specific to the August release of the Text Analytics for health container only.
+
+* New model-version for Text Analytics for health: `2020-07-24`
+
+The following properties in the JSON response have changed:
+
+* `type` has been renamed to `category`
+* `score` has been renamed to `confidenceScore`
+* Entities in the `category` field of the JSON output are now in pascal case. The following entities have been renamed:
+ * `EXAMINATION_RELATION` has been renamed to `RelationalOperator`.
+ * `EXAMINATION_UNIT` has been renamed to `MeasurementUnit`.
+ * `EXAMINATION_VALUE` has been renamed to `MeasurementValue`.
+ * `ROUTE_OR_MODE` has been renamed `MedicationRoute`.
+ * The relational entity `ROUTE_OR_MODE_OF_MEDICATION` has been renamed to `RouteOfMedication`.
+
+The following entities have been added:
+
+* Named Entity Recognition
+ * `AdministrativeEvent`
+ * `CareEnvironment`
+ * `HealthcareProfession`
+ * `MedicationForm`
+
+* Relation extraction
+ * `DirectionOfCondition`
+ * `DirectionOfExamination`
+ * `DirectionOfTreatment`
+
+## May 2020
+
+* Model version `2020-04-01`:
+ * Updated language support for sentiment analysis
+ * New "Address" entity category in Named Entity Recognition (NER)
+ * New subcategories in NER:
+ * Location - Geographical
+ * Location - Structural
+ * Organization - Stock Exchange
+ * Organization - Medical
+ * Organization - Sports
+ * Event - Cultural
+ * Event - Natural
+ * Event - Sports
+
+* The following properties in the JSON response have been added:
+ * `SentenceText` in sentiment analysis
+ * `Warnings` for each document
+
+* The names of the following properties in the JSON response have been changed, where applicable:
+ * `score` has been renamed to `confidenceScore`.
+   * `confidenceScore` has two decimal points of precision.
+ * `type` has been renamed to `category`.
+ * `subtype` has been renamed to `subcategory`.
+
+* New sentiment analysis feature - opinion mining
+* New personal (`PII`) domain filter for protected health information (`PHI`).
+
+## February 2020
+
+Additional entity types are now available in the Named Entity Recognition (NER). This update introduces model version `2020-02-01`, which includes:
+
+* Recognition of the following general entity types (English only):
+ * PersonType
+ * Product
+ * Event
+ * Geopolitical Entity (GPE) as a subtype under Location
+ * Skill
+
+* Recognition of the following personal information entity types (English only):
+ * Person
+ * Organization
+ * Age as a subtype under Quantity
+ * Date as a subtype under DateTime
+ * Email
+ * Phone Number (US only)
+ * URL
+ * IP Address
+
+## October 2019
+
+* Introduction of PII feature
+* Model version `2019-10-01`, which includes:
+ * Named entity recognition:
+ * Expanded detection and categorization of entities found in text.
+ * Recognition of the following new entity types:
+ * Phone number
+ * IP address
+ * Sentiment analysis:
+ * Significant improvements in the accuracy and detail of the API's text categorization and scoring.
+ * Automatic labeling for different sentiments in text.
+ * Sentiment analysis and output on a document and sentence level.
+
+ This model version supports: English (`en`), Japanese (`ja`), Chinese Simplified (`zh-Hans`), Chinese Traditional (`zh-Hant`), French (`fr`), Italian (`it`), Spanish (`es`), Dutch (`nl`), Portuguese (`pt`), and German (`de`).
+
+## Next steps
+
+See [What's new](../whats-new.md) for current service updates.
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
CLU offers the option to upload your utterance directly to the project rather th
"intent": "{intent}", "entities": [ {
- "entityName": "{entity}",
+ "category": "{entity}",
"offset": 19, "length": 10 }
CLU offers the option to upload your utterance directly to the project rather th
"intent": "{intent}", "entities": [ {
- "entityName": "{entity}",
+ "category": "{entity}",
"offset": 20, "length": 10 }, {
- "entityName": "{entity}",
+ "category": "{entity}",
"offset": 31, "length": 5 }
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/glossary.md
Use this article to learn about some of the definitions and terms you may encoun
A class is a user-defined category that indicates the overall classification of the text. Developers label their data with their classes before they pass it to the model for training. ## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall].
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
## Model
cognitive-services Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/active-learning.md
Once the import of the test file is complete, active learning suggestions can be
> [!div class="mx-imgBorder"] > [ ![Screenshot with review suggestions page displayed.]( ../media/active-learning/review-suggestions.png) ]( ../media/active-learning/review-suggestions.png#lightbox)
+> [!NOTE]
+> Active learning suggestions aren't shown in real time. There's an approximate delay of 30 minutes before suggestions appear on this pane. This delay balances the high cost of real-time updates to the index against overall service performance.
+ We can now either accept these suggestions or reject them using the options on the menu bar to **Accept all suggestions** or **Reject all suggestions**. Alternatively, to accept or reject individual suggestions, select the checkmark (accept) symbol or trash can (reject) symbol that appears next to individual questions in the **Review suggestions** page.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## Next steps
-* [What is Azure Cognitive Service for Language?](overview.md)
+* See the [previous updates](./concepts/previous-updates.md) article for service updates not listed here.
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
Title: Personalizer service encryption of data at rest
+ Title: Data-at-rest encryption in Personalizer
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Personalizer, and how to enable and manage CMK.
+description: Learn about the keys that you use for data-at-rest encryption in Personalizer. See how to use Azure Key Vault to configure customer-managed keys.
Previously updated : 08/28/2020 Last updated : 06/02/2022 + #Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
-# Personalizer service encryption of data at rest
+# Encryption of data at rest in Personalizer
-The Personalizer service automatically encrypts your data when persisted it to the cloud. The Personalizer service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+Personalizer is a service in Azure Cognitive Services that uses a machine learning model to provide apps with user-tailored content. When Personalizer persists data to the cloud, it encrypts that data. This encryption protects your data and helps you meet organizational security and compliance commitments.
[!INCLUDE [cognitive-services-about-encryption](../includes/cognitive-services-about-encryption.md)] > [!IMPORTANT]
-> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Personalizer service, you will need to create a new Personalizer resource and select E0 as the Pricing Tier. Once your Personalizer resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
+> Customer-managed keys are only available with the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It takes approximately 3-5 business days to hear back about the status of your request. If demand is high, you might be placed in a queue and approved when space becomes available.
+>
+> After you're approved to use customer-managed keys with Personalizer, create a new Personalizer resource and select E0 as the pricing tier. After you've created that resource, you can use Azure Key Vault to set up your managed identity.
[!INCLUDE [cognitive-services-cmk](../includes/configure-customer-managed-keys.md)]
cognitive-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-data-and-privacy.md
Also see:
- [See Responsible use guidelines for Personalizer](responsible-use-cases.md).
-To learn more about Microsoft's privacy and security commitments, see the[Microsoft Trust Center](https://www.microsoft.com/trust-center).
+To learn more about Microsoft's privacy and security commitments, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
The goal of this document is to reduce the time it takes for Event Management Pl
## What are virtual events and event management platforms?
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about[ Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
## What are the building blocks of an event management platform?
confidential-computing Use Cases Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md
Confidential computing can expand the number of workloads eligible for public cl
### BYOK (Bring Your Own Key) scenarios
-The adoption of hardware secure modules (HSM) enables secure transfer of keys and certificates to a protected cloud storage - [Azure Key Vault Managed HSM](\..\key-vault\managed-hsm\overview.md) ΓÇô without allowing the cloud service provider to access such sensitive information. Secrets being transferred never exist outside an HSM in plaintext form, enabling scenarios for sovereignty of keys and certificates that are client generated and managed, but still using a cloud-based secure storage.
+The adoption of hardware secure modules (HSM) enables secure transfer of keys and certificates to a protected cloud storage - [Azure Key Vault Managed HSM](../key-vault/managed-hsm/overview.md) ΓÇô without allowing the cloud service provider to access such sensitive information. Secrets being transferred never exist outside an HSM in plaintext form, enabling scenarios for sovereignty of keys and certificates that are client generated and managed, but still using a cloud-based secure storage.
## Secure blockchain
container-apps Authentication Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-github.md
To complete the procedure in this article, you need a GitHub account. To create
1. If you're configuring the first identity provider for this application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
- These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can change customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow]([Authentication flow](authentication.md#authentication-flow)).
+    These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](./authentication.md#authentication-flow).
1. Select **Add**.
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Create a file named *queue.json* and paste the following configuration code into
}, { "name": "QueueConnectionString",
- "secretref": "queueconnection"
+ "secretRef": "queueconnection"
} ] }
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Here, a connection string to a queue storage account is declared in the `--secre
## Using secrets
-Application secrets are referenced via the `secretref` property. Secret values are mapped to application-level secrets where the `secretref` value matches the secret name declared at the application level.
+The secret value is mapped to the secret name declared at the application level as described in the [defining secrets](#defining-secrets) section. The `passwordSecretRef` and `secretRef` parameters are used to reference the secret names as environment variables at the container level. The `passwordSecretRef` provides a descriptive parameter name for secrets containing passwords.
## Example
-The following example shows an application that declares a connection string at the application level and is used throughout the configuration via `secretref`.
+The following example shows an application that declares a connection string at the application level and is used throughout the configuration via `secretRef`.
# [ARM template](#tab/arm-template)
az containerapp create \
--environment "my-environment-name" \ --image demos/myQueueApp:v1 \ --secrets "queue-connection-string=$CONNECTIONSTRING" \
- --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"
+ --env-vars "QueueName=myqueue" "ConnectionString=secretRef:queue-connection-string"
```
-Here, the environment variable named `connection-string` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
+Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretRef`.
# [PowerShell](#tab/powershell)
az containerapp create `
--environment "my-environment-name" ` --image demos/myQueueApp:v1 ` --secrets "queue-connection-string=$CONNECTIONSTRING" `
- --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"
+ --env-vars "QueueName=myqueue" "ConnectionString=secretRef:queue-connection-string"
```
-Here, the environment variable named `connection-string` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
+Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretRef`.
container-apps Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-connector.md
+
+ Title: Connect a container app to a cloud service with Service Connector
+description: Learn to connect a container app to an Azure service using the Azure portal or the CLI.
++++ Last updated : 06/16/2022
+# Customer intent: As an app developer, I want to connect a containerized app to a storage account in the Azure portal using Service Connector.
++
+# How to connect a Container Apps instance to a backing service
+
+Azure Container Apps allows you to use Service Connector to connect to cloud services in just a few steps. Service Connector manages the configuration of the network settings and connection information between different services. To view all supported services, [learn more about Service Connector](../service-connector/overview.md#what-services-are-supported-in-service-connector).
+
+In this article, you learn to connect a container app to Azure Blob Storage.
+
+> [!IMPORTANT]
+> This feature in Container Apps is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An application deployed to Container Apps in a [region supported by Service Connector](../service-connector/concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](quickstart-portal.md).
+- An Azure Blob Storage account
+
+## Sign in to Azure
+
+First, sign in to Azure.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az login
+```
+
+This command prompts your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign-in options, go to [sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Create a new service connection
+
+Use Service Connector to create a new service connection in Container Apps using the Azure portal or the CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the Azure portal.
+1. Select **All resources** on the left of the Azure portal.
+1. Enter **Container Apps** in the filter and select the name of the container app you want to use in the list.
+1. Select **Service Connector** from the left table of contents.
+1. Select **Create**.
+
+ :::image type="content" source="media/service-connector/connect-service-connector.png" alt-text="Screenshot of the Azure portal, selecting Service Connector within a container app." lightbox="media/service-connector/connect-service-connector-expanded.png":::
+
+1. Select or enter the following settings.
+
+ | Setting | Suggested value | Description |
+ | | | |
+    | **Container** | Your container name | Select your container app. |
+ | **Service type** | Blob Storage | This is the target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+ | **Subscription** | One of your subscriptions | The subscription containing your target service. The default value is the subscription for your container app. |
+ | **Connection name** | Generated unique name | The connection name that identifies the connection between your container app and target service. |
+ | **Storage account** | Your storage account name | The target storage account to which you want to connect. If you choose a different service type, select the corresponding target service instance. |
+ | **Client type** | The app stack in your selected container | Your application stack that works with the target service you selected. The default value is **none**, which generates a list of configurations. If you know about the app stack or the client SDK in the container you selected, select the same app stack for the client type. |
+
+1. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use an access key to connect to your Blob Storage account.
+
+1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update the firewall allowlist in Blob Storage so that your container app can reach the Blob Storage account.
+
+1. Select **Next: Review + Create** to review the provided information. The final validation takes a few seconds to run. Then select **Create** to create the service connection. It might take a minute or so to complete the operation.
+
+### [Azure CLI](#tab/azure-cli)
+
+The following steps create a service connection using an access key or a system-assigned managed identity.
+
+1. Use the Azure CLI command `az containerapp connection list-support-types` to view all supported target services.
+
+ ```azurecli-interactive
+ az provider register -n Microsoft.ServiceLinker
+ az containerapp connection list-support-types --output table
+ ```
+
+1. Use the Azure CLI command `az containerapp connection create` to create a service connection from a container app.
+
+ If you're connecting with an access key, run the code below:
+
+ ```azurecli-interactive
+ az containerapp connection create storage-blob --secret
+ ```
+
+ If you're connecting with a system-assigned managed identity, run the code below:
+
+ ```azurecli-interactive
+ az containerapp connection create storage-blob --system-identity
+ ```
+
+1. Provide the following information at the Azure CLI's request:
+
+ - **The resource group which contains the container app**: the name of the resource group with the container app.
+ - **Name of the container app**: the name of your container app.
+    - **The container where the connection information will be saved**: the name of the container, in your container app, that connects to the target service.
+    - **The resource group which contains the storage account**: the name of the resource group that contains the storage account. In this guide, we're using Blob Storage.
+ - **Name of the storage account**: the name of the storage account that contains your blob.
+
+ > [!IMPORTANT]
+ > To use Managed Identity, you must have the permission to manage [Azure Active Directory role assignments](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have this permission, you won't be able to create a connection. You can ask your subscription owner to grant you this permission or use an access key instead to create the connection.
+
+ > [!NOTE]
+ > If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --secret` to provision a new one.
+++
+## View service connections in Container Apps
+
+View your existing service connections using the Azure portal or the CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. In **Service Connector**, select **Refresh** and you'll see a Container Apps connection displayed.
+
+1. Select **>** to expand the list. You can see the environment variables required by your application code.
+
+1. Select **...** and then **Validate**. You can see the connection validation details in the pop-up panel on the right.
+
+ :::image type="content" source="media/service-connector/connect-service-connector-refresh.png" alt-text="Screenshot of the Azure portal, viewing connection validation details.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Provide the following information:
+
+- **Source compute service resource group name**: the resource group name of the container app.
+- **Container app name**: the name of your container app.
+
+```azurecli-interactive
+az containerapp connection list -g "<your-container-app-resource-group>" --name "<your-container-app-name>" --output table
+```
+
+The output also displays the provisioning state of your connections: failed or succeeded.
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Environments in Azure Container Apps](environment.md)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
az network vnet subnet create `
-With the VNET established, you can now query for the VNET and infrastructure subnet ID.
+With the VNET established, you can now query for the infrastructure subnet ID.
# [Bash](#tab/bash)
-```bash
-VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query "id" -o tsv | tr -d '[:space:]'`
-```
- ```bash INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'` ``` # [PowerShell](#tab/powershell)
-```powershell
-$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-```
- ```powershell $INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv) ```
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
az network vnet subnet create `
-With the virtual network created, you can retrieve the IDs for both the VNET and the infrastructure subnet.
+With the virtual network created, you can retrieve the ID for the infrastructure subnet.
# [Bash](#tab/bash)
-```bash
-VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query "id" -o tsv | tr -d '[:space:]'`
-```
- ```bash INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'` ``` # [PowerShell](#tab/powershell)
-```powershell
-$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-```
- ```powershell $INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv) ```
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
To quickly manage and deploy applications for Kubernetes, you can use the [open-
This article shows you how to host Helm charts repositories in an Azure container registry, using Helm 3 commands and storing charts as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the [Chart Template Developer's Guide][develop-helm-charts]. You can also store an existing Helm chart from another Helm repo. > [!IMPORTANT]
-> This article has been updated with Helm 3 commands as of version **3.7.1**. Helm 3.7.1 includes changes to Helm CLI commands and OCI support introduced in earlier versions of Helm 3.
+> This article has been updated with Helm 3 commands. Helm 3.7 includes changes to Helm CLI commands and OCI support introduced in earlier versions of Helm 3. By design, `helm` moves forward with each version. We recommend using version **3.7.2** or later.
## Helm 3 or Helm 2?
If you've previously stored and deployed charts using Helm 2 and Azure Container
The following resources are needed for the scenario in this article: - **An Azure container registry** in your Azure subscription. If needed, create a registry using the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).-- **Helm client version 3.7.1 or later** - Run `helm version` to find your current version. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install]. If you upgrade from an earlier version of Helm 3, review the [release notes](https://github.com/helm/helm/releases).
+- **Helm client version 3.7 or later** - Run `helm version` to find your current version. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install]. If you upgrade from an earlier version of Helm 3, review the [release notes](https://github.com/helm/helm/releases).
- **A Kubernetes cluster** where you will install a Helm chart. If needed, create an AKS cluster [using the Azure CLI][./learn/quick-kubernetes-deploy-cli], [using Azure PowerShell][./learn/quick-kubernetes-deploy-powershell], or [using the Azure portal][./learn/quick-kubernetes-deploy-portal]. - **Azure CLI version 2.0.71 or later** - Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
Output is similar to:
Run the [az acr repository show-manifests][az-acr-repository-show-manifests] command to see details of the chart stored in the repository. For example: ```azurecli
-az acr repository show-manifests \
- --name $ACR_NAME \
- --repository helm/hello-world --detail
+az acr manifest list-metadata \
+ --registry $ACR_NAME \
+ --name helm/hello-world --detail
``` Output, abbreviated in this example, shows a `configMediaType` of `application/vnd.cncf.helm.config.v1+json`:
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
# How to meet data residency requirements in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the[ residency requirements.](https://azure.microsoft.com/global-infrastructure/data-residency/)
+In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the [residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
## Residency requirements for data
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
You'll learn how to create a function app that can access Azure Cosmos DB data w
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md) - An existing Azure Functions function app. [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md)
- - A system-assigned managed identity for the function app. [Add a system-assigned identity](/app-service/overview-managed-identity.md?tabs=cli#add-a-system-assigned-identity)
+ - A system-assigned managed identity for the function app. [Add a system-assigned identity](../app-service/overview-managed-identity.md#add-a-system-assigned-identity)
- [Azure Functions Core Tools](../azure-functions/functions-run-local.md) - To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).
You'll learn how to create a function app that can access Azure Cosmos DB data w
> [!NOTE] > These variables will be re-used in later steps. This example assumes your Azure Cosmos DB account name is ``msdocs-cosmos-app``, your function app name is ``msdocs-function-app`` and your resource group name is ``msdocs-cosmos-functions-dotnet-identity``.
-1. View the function app's properties using the [``az functionapp show``](/cli/azure/functionapp&preserve-view=true#az-functionapp-show) command.
+1. View the function app's properties using the [``az functionapp show``](/cli/azure/functionapp#az-functionapp-show) command.
```azurecli-interactive az functionapp show \
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
+
+ Title: Get started with Azure Cosmos DB MongoDB API and JavaScript
+description: Get started developing a JavaScript application that works with Azure Cosmos DB MongoDB API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB MongoDB API database.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022++++
+# Get started with Azure Cosmos DB MongoDB API and JavaScript
+
+This article shows you how to connect to Azure Cosmos DB MongoDB API using the native MongoDB npm package. Once connected, you can perform operations on databases, collections, and docs.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
++
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Node.js LTS](https://nodejs.org/en/download/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+* [Azure Cosmos DB MongoDB API resource](quickstart-javascript.md#create-an-azure-cosmos-db-account)
+
+## Create a new JavaScript app
+
+1. Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
+
+ ```console
+ npm init
+ ```
+
+2. Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+
+ ```console
+ npm install mongodb dotenv
+ ```
+
+3. To run the app, use a terminal to navigate to the application directory and run the application.
+
+ ```console
+ node index.js
+ ```
+
+## Connect with MongoDB native driver to Azure Cosmos DB MongoDB API
+
+To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
+
+The most common constructor for **MongoClient** has two parameters:
+
+| Parameter | Example value | Description |
+| | | |
+| ``url`` | ``COSMOS_CONNECTION_STRING`` environment variable | MongoDB API connection string to use for all requests |
+| ``options`` | `{ssl: true, tls: true, }` | [MongoDB Options](https://mongodb.github.io/node-mongodb-native/4.5/interfaces/MongoClientOptions.html) for the connection. |
+
+Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection issues.
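+
+As a minimal sketch (assuming the `COSMOS_CONNECTION_STRING` environment variable is set and the `dotenv` package is installed, as described later in this article), constructing and connecting a client might look like this:
+
+```javascript
+// Minimal sketch: construct a MongoClient from a connection string.
+// COSMOS_CONNECTION_STRING is assumed to be set in the environment or a .env file.
+require('dotenv').config();
+const { MongoClient } = require('mongodb');
+
+const client = new MongoClient(process.env.COSMOS_CONNECTION_STRING);
+
+async function main() {
+  try {
+    await client.connect();   // open the connection
+    console.log('Connected to the MongoDB API account');
+  } finally {
+    await client.close();     // always close when finished
+  }
+}
+
+main().catch(console.error);
+```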
+
+## Get resource name
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+Skip this step and use the information for the portal in the next step.
+++
+## Retrieve your connection string
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos``.
++++
+## Configure environment variables
++
+## Create MongoClient with connection string
++
+1. Add dependencies to reference the MongoDB and DotEnv npm packages.
+
+ :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="package_dependencies":::
+
+2. Define a new instance of the ``MongoClient`` class using the constructor, and [``process.env``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to use the connection string.
+
+ :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="client_credentials":::
+
+For more information on different ways to create a ``MongoClient`` instance, see [MongoDB NodeJS Driver Quick Start](https://www.npmjs.com/package/mongodb#quick-start).
+
+## Close the MongoClient connection
+
+When your application is finished with the connection, remember to close it. The `.close()` call should come after all database calls are made.
+
+```javascript
+client.close()
+```
+
+## Use MongoDB client classes with Cosmos DB for MongoDB API
++
+Each type of resource is represented by one or more associated JavaScript classes. Here's a list of the most common classes:
+
+| Class | Description |
+|||
+|[``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html)|This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.|
+|[``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)|This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.|
+|[``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html)|This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.|
+
+The following guides show you how to use each of these classes to build your application.
+
+**Guide**:
+
+* [Manage databases](how-to-javascript-manage-databases.md)
+* [Manage collections](how-to-javascript-manage-collections.md)
+* [Manage documents](how-to-javascript-manage-documents.md)
+* [Use queries to find documents](how-to-javascript-manage-queries.md)
+
+## See also
+
+- [Package (npm)](https://www.npmjs.com/package/mongodb)
+- [API reference](https://docs.mongodb.com/drivers/node)
+
+## Next steps
+
+Now that you've connected to a MongoDB API account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB MongoDB API using JavaScript](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-collections.md
+
+ Title: Create a collection in Azure Cosmos DB MongoDB API using JavaScript
+description: Learn how to work with a collection in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Manage a collection in Azure Cosmos DB MongoDB API using JavaScript
++
+Manage your MongoDB collection stored in Cosmos DB with the native MongoDB client driver.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
++
+## Name a collection
+
+In Azure Cosmos DB, a collection is analogous to a table in a relational database. When you create a collection, the collection name forms a segment of the URI used to access the collection resource and any child docs.
+
+Here are some quick rules when naming a collection:
+
+* Keep collection names between 3 and 63 characters long
+* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+* Collection names must start with a lowercase letter or number.
+
+## Get collection instance
+
+Use an instance of the **Collection** class to access the collection on the server.
+
+* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+
+The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
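+
+As a minimal sketch (assuming `client` is an already connected `MongoClient`; the `adventureworks` and `products` names are illustrative), getting a collection instance looks like this:
+
+```javascript
+// Minimal sketch: get a Collection instance from a connected MongoClient.
+// Database and collection names here are illustrative.
+const database = client.db('adventureworks');
+const collection = database.collection('products');
+console.log(`Using collection: ${collection.collectionName}`);
+```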
+
+## Create a collection
+
+To create a collection, insert a document into the collection.
+
+* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
+* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
++
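+
+As a minimal sketch (assuming `client` is a connected `MongoClient`; names and fields are illustrative), inserting a document into a collection that doesn't exist yet creates it:
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function createCollectionByInsert(client) {
+  const collection = client.db('adventureworks').collection('products');
+
+  // Inserting the first document creates the collection if it doesn't exist.
+  const result = await collection.insertOne({ name: 'Road bike', category: 'bikes' });
+  console.log(`Inserted document with _id: ${result.insertedId}`);
+}
+```
+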
+## Drop a collection
+
+* [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+
+Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
++
+The preceding code snippet displays the following example console output:
++
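+
+A minimal sketch of dropping a collection (assuming `client` is a connected `MongoClient`; names are illustrative, and the call resolves to `true` on success):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function dropProductsCollection(client) {
+  const dropped = await client.db('adventureworks').dropCollection('products');
+  console.log(`Collection dropped: ${dropped}`);
+}
+```
+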
+## Get collection indexes
+
+An index is used by the MongoDB query engine to improve the performance of database queries.
+
+* [MongoClient.Db.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
++
+The preceding code snippet displays the following example console output:
++
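+
+A minimal sketch of listing the indexes on a collection (assuming `client` is a connected `MongoClient`; names are illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function listIndexes(client) {
+  const indexes = await client.db('adventureworks').collection('products').indexes();
+
+  // Each index document includes at least a name and the indexed key(s).
+  for (const index of indexes) {
+    console.log(`${index.name}: ${JSON.stringify(index.key)}`);
+  }
+}
+```
+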
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-databases.md
+
+ Title: Manage a MongoDB database using JavaScript
+description: Learn how to manage your Cosmos DB resource when it provides the MongoDB API with a JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Manage a MongoDB database using JavaScript
++
+Your MongoDB server in Azure Cosmos DB is available from the common npm packages for MongoDB such as:
+
+* [MongoDB](https://www.npmjs.com/package/mongodb)
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
+
+## Get database instance
+
+The database holds the collections and their documents. Use an instance of the **Db** class to access the databases on the server.
+
+* [MongoClient.Db](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+
+The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
+
+## Get server information
+
+Access the **Admin** class to retrieve server information. You don't need to specify the database name in the `db` method. The information returned is specific to MongoDB and doesn't represent the Azure Cosmos DB platform itself.
+
+* [MongoClient.Db.Admin](https://mongodb.github.io/node-mongodb-native/4.7/classes/Admin.html)
++
+The preceding code snippet displays the following example console output:
++
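+
+A minimal sketch of retrieving server information through the `Admin` class (assuming `client` is a connected `MongoClient`):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function getServerInfo(client) {
+  // No database name is needed; admin() is available from any Db instance.
+  const serverInfo = await client.db().admin().serverInfo();
+  console.log(serverInfo.version);
+}
+```
+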
+## Does database exist?
+
+The native MongoDB driver for JavaScript creates the database if it doesn't exist when you access it. If you would prefer to know if the database already exists before using it, get the list of current databases and filter for the name:
+
+* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
++
+The preceding code snippet displays the following example console output:
++
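+
+A minimal sketch of checking whether a database exists by name (assuming `client` is a connected `MongoClient`; the `adventureworks` name is illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function databaseExists(client, databaseName) {
+  const result = await client.db().admin().listDatabases();
+  return result.databases.some(db => db.name === databaseName);
+}
+
+// Example: databaseExists(client, 'adventureworks').then(console.log);
+```
+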
+## Get list of databases, collections, and document count
+
+When you manage your MongoDB server programmatically, it's helpful to know what databases and collections are on the server and how many documents are in each collection.
+
+* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+* [MongoClient.Db.listCollections](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#listCollections)
+* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+* [MongoClient.Db.Collection.countDocuments](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#countDocuments)
++
+The preceding code snippet displays the following example console output:
++
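+
+A minimal sketch that walks the server's databases and collections and counts the documents in each (assuming `client` is a connected `MongoClient`; output format is illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function listDatabasesAndCollections(client) {
+  const { databases } = await client.db().admin().listDatabases();
+
+  for (const { name: databaseName } of databases) {
+    const database = client.db(databaseName);
+    const collections = await database.listCollections().toArray();
+
+    for (const { name: collectionName } of collections) {
+      const count = await database.collection(collectionName).countDocuments();
+      console.log(`${databaseName}.${collectionName}: ${count} document(s)`);
+    }
+  }
+}
+```
+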
+## Get database object instance
+
+To get a database object instance, call the following method. This method accepts an optional database name and can be part of a chain.
+
+* [``MongoClient.Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)
+
+A database is created when it is accessed. The most common way to access a new database is to add a document to a collection. In one line of code using chained objects, the database, collection, and doc are created.
+
+```javascript
+const insertOneResult = await client.db("adventureworks").collection("products").insertOne(doc);
+```
+
+Learn more about working with [collections](how-to-javascript-manage-collections.md) and documents.
+
+## Drop a database
+
+A database is removed from the server using the `dropDatabase` method on the `Db` class.
+
+* [DB.dropDatabase](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropDatabase)
++
+The preceding code snippet displays the following example console output:
++
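+
+A minimal sketch of dropping a database (assuming `client` is a connected `MongoClient`; the name is illustrative, and the call resolves to `true` on success):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function dropDatabase(client) {
+  const dropped = await client.db('adventureworks').dropDatabase();
+  console.log(`Database dropped: ${dropped}`);
+}
+```
+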
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Work with a collection](how-to-javascript-manage-collections.md)
cosmos-db How To Javascript Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-documents.md
+
+ Title: Create a document in Azure Cosmos DB MongoDB API using JavaScript
+description: Learn how to work with a document in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Manage a document in Azure Cosmos DB MongoDB API using JavaScript
++
+Manage your MongoDB documents with the ability to insert, update, and delete documents.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+
+## Insert a document
+
+Insert a document, defined with a JSON schema, into your collection.
+
+* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
++
+The preceding code snippet displays the following example console output:
++
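+
+A minimal sketch of inserting one document and several documents (assuming `client` is a connected `MongoClient`; names and fields are illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function insertDocuments(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  // Insert a single document.
+  const single = await products.insertOne({ name: 'Road bike', category: 'bikes' });
+  console.log(`Inserted _id: ${single.insertedId}`);
+
+  // Insert several documents in one call.
+  const many = await products.insertMany([
+    { name: 'Mountain bike', category: 'bikes' },
+    { name: 'Helmet', category: 'accessories' }
+  ]);
+  console.log(`Inserted ${many.insertedCount} documents`);
+}
+```
+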
+## Document ID
+
+If you don't provide an ID (`_id`) for your document, one is created for you as a BSON ObjectId. The value of the generated ID can then be accessed with the `ObjectId` class.
+
+* [ObjectId](https://mongodb.github.io/node-mongodb-native/4.7/classes/ObjectId.html)
+
+Use the ID to query for documents:
+
+```javascript
+const query = { _id: ObjectId("62b1f43a9446918500c875c5")};
+```
+
+## Update a document
+
+To update a document, specify the query used to find the document along with a set of properties of the document that should be updated. You can choose to upsert the document, which inserts the document if it doesn't already exist.
+
+* [MongoClient.Db.Collection.updateOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateOne)
+* [MongoClient.Db.Collection.updateMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateMany)
+++
+The preceding code snippet displays the following example console output for an insert:
++
+The preceding code snippet displays the following example console output for an update:
++
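+
+A minimal sketch of an upsert with `updateOne` (assuming `client` is a connected `MongoClient`; the filter, update, and names are illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function upsertDocument(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const result = await products.updateOne(
+    { name: 'Road bike' },                           // query used to find the document
+    { $set: { category: 'bikes', inStock: true } },  // properties to update
+    { upsert: true }                                 // insert the document if it doesn't exist
+  );
+
+  console.log(`matched: ${result.matchedCount}, modified: ${result.modifiedCount}, upsertedId: ${result.upsertedId}`);
+}
+```
+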
+## Bulk updates to a collection
+
+You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+
+The following bulk operations are available:
+
+* [MongoClient.Db.Collection.bulkWrite](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#bulkWrite)
+
+ * insertOne
+ * updateOne
+ * updateMany
+ * deleteOne
+ * deleteMany
++
+The preceding code snippet displays the following example console output:
++
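+
+A minimal sketch of a `bulkWrite` call that mixes several operation types (assuming `client` is a connected `MongoClient`; all names are illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function bulkUpdateCollection(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const result = await products.bulkWrite([
+    { insertOne: { document: { name: 'Gloves', category: 'accessories' } } },
+    { updateOne: { filter: { name: 'Helmet' }, update: { $set: { inStock: false } }, upsert: true } },
+    { deleteMany: { filter: { category: 'discontinued' } } }
+  ]);
+
+  console.log(`inserted: ${result.insertedCount}, upserted: ${result.upsertedCount}, deleted: ${result.deletedCount}`);
+}
+```
+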
+## Delete a document
+
+To delete documents, use a query to define how the documents are found.
+
+* [MongoClient.Db.Collection.deleteOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteOne)
+* [MongoClient.Db.Collection.deleteMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteMany)
++
+The preceding code snippet displays the following example console output:
++
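+
+A minimal sketch of deleting a single document and a set of documents matched by a query (assuming `client` is a connected `MongoClient`; names are illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function deleteDocuments(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  // Delete the first document that matches the query.
+  const one = await products.deleteOne({ name: 'Road bike' });
+
+  // Delete every document that matches the query.
+  const many = await products.deleteMany({ category: 'discontinued' });
+
+  console.log(`Deleted ${one.deletedCount + many.deletedCount} document(s)`);
+}
+```
+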
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
+
+ Title: Use a query in Azure Cosmos DB MongoDB API using JavaScript
+description: Learn how to use a query in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Use a query in Azure Cosmos DB MongoDB API using JavaScript
++
+Use queries to find documents in a collection.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
++
+## Query for documents
+
+To find documents, use a query to define how the documents are found.
+
+* [MongoClient.Db.Collection.findOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#findOne)
+* [MongoClient.Db.Collection.find](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#find)
+* [FindCursor](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html)
++
+The preceding code snippet displays the following example console output:
++
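+
+A minimal sketch of finding one document and iterating a `FindCursor` for several matches (assuming `client` is a connected `MongoClient`; the query and names are illustrative):
+
+```javascript
+// Assumes `client` is a connected MongoClient (see the get-started article).
+async function queryDocuments(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  // Find a single document that matches the query.
+  const firstBike = await products.findOne({ category: 'bikes' });
+  console.log(firstBike);
+
+  // find() returns a FindCursor; toArray() materializes all matching documents.
+  const allBikes = await products.find({ category: 'bikes' }).toArray();
+  console.log(`Found ${allBikes.length} document(s)`);
+}
+```
+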
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-javascript.md
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
#### [Azure CLI](#tab/azure-cli)
-1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
-
- ```azurecli-interactive
- # Variable for resource group name
- resourceGroupName="msdocs-cosmos-javascript-quickstart-rg"
- location="westus"
-
- # Variable for account name with a randomly generated suffix
- let suffix=$RANDOM*$RANDOM
- accountName="msdocs-javascript-$suffix"
- ```
-
-1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
-
-1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
-
- ```azurecli-interactive
- az group create \
- --name $resourceGroupName \
- --location $location
- ```
-
-1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB MongoDB API account with default settings.
-
- ```azurecli-interactive
- az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --locations regionName=$location
- --kind MongoDB
- ```
-
-1. Find the MongoDB API **connection string** from the list of connection strings for the account with the[``az cosmosdb list-connection-strings``](/cli/azure/cosmosdb#az-cosmosdb-list-connection-strings) command.
-
- ```azurecli-interactive
- az cosmosdb list-connection-strings \
- --resource-group $resourceGroupName \
- --name $accountName
- ```
-
-1. Record the *PRIMARY KEY* values. You'll use these credentials later.
#### [PowerShell](#tab/azure-powershell)
-1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and **LOCATION**.
-
- ```azurepowershell-interactive
- # Variable for resource group name
- $RESOURCE_GROUP_NAME = "msdocs-cosmos-javascript-quickstart-rg"
- $LOCATION = "West US"
-
- # Variable for account name with a randomnly generated suffix
- $SUFFIX = Get-Random
- $ACCOUNT_NAME = "msdocs-javascript-$SUFFIX"
- ```
-
-1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-
-1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
-
- ```azurepowershell-interactive
- $parameters = @{
- Name = $RESOURCE_GROUP_NAME
- Location = $LOCATION
- }
- New-AzResourceGroup @parameters
- ```
-
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB MongoDB API account with default settings.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Location = $LOCATION
- Kind = "MongoDB"
- }
- New-AzCosmosDBAccount @parameters
- ```
-
-1. Find the *CONNECTION STRING* from the list of keys and connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Type = "ConnectionStrings"
- }
- Get-AzCosmosDBAccountKey @parameters |
- Select-Object -Property "Primary MongoDB Connection String"
- ```
-
-1. Record the *CONNECTION STRING* value. You'll use these credentials later.
#### [Portal](#tab/azure-portal)
-> [!TIP]
-> For this quickstart, we recommend using the resource group name ``msdocs-cosmos-javascript-quickstart-rg``.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the Azure portal menu or the **Home page**, select **Create a resource**.
-
-1. On the **New** page, search for and select **Azure Cosmos DB**.
-
-1. On the **Select API option** page, select the **Create** option within the **MongoDB** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the MongoDB API](/azure/cosmos-db/mongodb/mongodb-introduction).
-
- :::image type="content" source="media/quickstart-javascript/cosmos-api-choices.png" lightbox="media/quickstart-javascript/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
-
-1. On the **Create Azure Cosmos DB Account** page, enter the following information:
- | Setting | Value | Description |
- | | | |
- | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos account. |
- | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
- | Account Name | A unique name | Enter a name to identify your Azure Cosmos account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3-44 characters in length. |
- | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
- | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
- | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
- | Version | MongoDB version | Select the MongoDB server version that matches your application requirements.
-
- > [!NOTE]
- > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
-
- :::image type="content" source="media/quickstart-javascript/new-cosmos-account-page.png" lightbox="media/quickstart-javascript/new-cosmos-account-page.png" alt-text="Screenshot of new account page for Azure Cosmos D B SQL A P I.":::
+
-1. Select **Review + create**.
+### Get MongoDB connection string
-1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
+#### [Azure CLI](#tab/azure-cli)
-1. Select **Go to resource** to go to the Azure Cosmos DB account page.
- :::image type="content" source="media/quickstart-javascript/cosmos-deployment-complete.png" lightbox="media/quickstart-javascript/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
+#### [PowerShell](#tab/azure-powershell)
-1. From the Azure Cosmos DB SQL API account page, select the **Connection String** navigation menu option.
-1. Record the values for the **PRIMARY CONNECTION STRING** field. You'll use this value in a later step.
+#### [Portal](#tab/azure-portal)
- :::image type="content" source="media/quickstart-javascript/cosmos-endpoint-key-credentials.png" lightbox="media/quickstart-javascript/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
npm install mongodb dotenv
### Configure environment variables
-To use the **CONNECION STRING** values within your JavaScript code, set this value on the local machine running the application. To set the environment variable, use your preferred terminal to run the following commands:
-
-#### [Windows](#tab/windows)
-
-```powershell
-$env:COSMOS_CONNECTION_STRING = "<cosmos-connection-string>"
-```
-
-#### [Linux / macOS](#tab/linux+macos)
-
-```bash
-export COSMOS_CONNECTION_STRING="<cosmos-connection-string>"
-```
-
-#### [.env](#tab/dotenv)
-
-A `.env` file is a standard way to store environment variables in a project. Create a `.env` file in the root of your project. Add the following lines to the `.env` file:
-
-```dotenv
-COSMOS_CONNECTION_STRING="<cosmos-connection-string>"
-```
-- ## Object model
cosmos-db Partners Migration Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partners-migration-cosmosdb.md
From NoSQL migration to application development, you can choose from a variety o
|[Altoros Development LLC](https://www.altoros.com/) | IoT, Personalization Retail (inventory), Serverless architectures NoSQL migration, App development| USA | |[Avanade](https://www.avanade.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Austria, Germany, Switzerland, Italy, Norway, Spain, UK, Canada | |[Accenture](https://www.accenture.com/) | IoT, Retail (inventory), Serverless Architecture, App development |Global|
-|[Capax Global LLC](https://www.capaxglobal.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
+|Capax Global LLC | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
| [Capgemini](https://www.capgemini.com/) | Retail (inventory), IoT, Operational Analytics (Spark), App development | USA, France, UK, Netherlands, Finland | | [Cognizant](https://www.cognizant.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), App development |USA, Canada, UK, Denmark, Netherlands, Switzerland, Australia, Japan | |[Infosys](https://www.infosys.com/) | App development | USA |
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
If you are using our older Spark 2.4 Connector, you can find out how to migrate
* Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: [Release notes and resources](sql-api-sdk-java-spark-v3.md) * Learn more about [Apache Spark](https://spark.apache.org/).
+* Learn how to configure [throughput control](throughput-control-spark.md).
* Check out more [samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
-# Microsoft Defender for Cosmos DB
+# Microsoft Defender for Azure Cosmos DB
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Microsoft Defender for Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
+Microsoft Defender for Azure Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
Security alerts are triggered when anomalies in activity occur. These security alerts show up in [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). Subscription administrators also get these alerts over email, with details of the suspicious activity and recommendations on how to investigate and remediate the threats. > [!NOTE] >
-> * Microsoft Defender for Cosmos DB is currently available only for the Core (SQL) API.
-> * Microsoft Defender for Cosmos DB is not currently available in Azure government and sovereign cloud regions.
+> * Microsoft Defender for Azure Cosmos DB is currently available only for the Core (SQL) API.
+> * Microsoft Defender for Azure Cosmos DB is not currently available in Azure government and sovereign cloud regions.
For a full investigation experience of the security alerts, we recommended enabling [diagnostic logging in Azure Cosmos DB](../monitor-cosmos-db.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases. ## Threat types
-Microsoft Defender for Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
+Microsoft Defender for Azure Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
- **Potential SQL injection attacks**: Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
Microsoft Defender for Cosmos DB detects anomalous activities indicating unusual
- **Suspicious database activity**: For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns.
-## Configure Microsoft Defender for Cosmos DB
+## Configure Microsoft Defender for Azure Cosmos DB
-You can configure Microsoft Defender protection in any of several ways, described in the following sections.
-
-# [Portal](#tab/azure-portal)
-
-1. Launch the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
-
-2. From the Azure Cosmos DB account, from the **Settings** menu, select **Microsoft Defender for Cloud**.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp.png" alt-text="Set up Azure Defender for Cosmos DB" border="true":::
-
-3. In the **Microsoft Defender for Cloud** configuration blade:
-
- * Change the option from **OFF** to **ON**.
- * Click **Save**.
-
-# [REST API](#tab/rest-api)
-
-Use REST API commands to create, update, or get the Azure Defender setting for a specific Azure Cosmos DB account.
-
-* [Advanced Threat Protection - Create](/rest/api/securitycenter/advancedthreatprotection/create)
-* [Advanced Threat Protection - Get](/rest/api/securitycenter/advancedthreatprotection/get)
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the following PowerShell cmdlets:
-
-* [Enable Advanced Threat Protection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection)
-* [Get Advanced Threat Protection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection)
-* [Disable Advanced Threat Protection](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection)
-
-# [ARM template](#tab/arm-template)
-
-Use an Azure Resource Manager (ARM) template to set up Azure Cosmos DB with Azure Defender protection enabled. For more information, see
-[Create a Cosmos DB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
-
-# [Azure Policy](#tab/azure-policy)
-
-Use an Azure Policy to enable Azure Defender for Cosmos DB.
-
-1. Launch the Azure **Policy - Definitions** page, and search for the **Deploy Advanced Threat Protection for Cosmos DB** policy.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db.png" alt-text="Search Policy":::
-
-1. Click on the **Deploy Advanced Threat Protection for CosmosDB** policy, and then click **Assign**.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp-policy.png" alt-text="Select Subscription Or Group":::
-
-1. From the **Scope** field, click the three dots, select an Azure subscription or resource group, and then click **Select**.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp-details.png" alt-text="Policy Definitions Page":::
-
-1. Enter the other parameters, and click **Assign**.
--
+See [Enable Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/defender-for-databases-enable-cosmos-protections.md).
## Manage security alerts
When Azure Cosmos DB activity anomalies occur, a security alert is triggered wit
## Next steps
-* Learn more about [Microsoft Defender for Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
+* Learn more about [Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-account.md
Create a single Azure Cosmos DB account using the SQL API.
1. On the **New** page, search for and select **Azure Cosmos DB**.
-1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](/azure/cosmos-db/sql/introduction.md).
+1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
:::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-get-started.md
The most common constructor for **CosmosClient** has two parameters:
--query "documentEndpoint" ```
-1. Find the *PRIMARY KEY* from the list of keys for the account with the[``az-cosmosdb-keys-list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
```azurecli-interactive az cosmosdb keys list \
Another constructor for **CosmosClient** only contains a single parameter:
) ```
-1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the[``az-cosmosdb-keys-list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
```azurecli-interactive az cosmosdb keys list \
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md
curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file>
### Confirm data written to Kafka topic
-1. Open Kafka Topic UI on `<http://localhost:9000>`.
+1. Open Kafka Topic UI on `http://localhost:9000`.
1. Select the Kafka "apparels" topic you created. 1. Verify that the document you inserted into Azure Cosmos DB earlier appears in the Kafka topic.
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the SQL API.
--query "documentEndpoint" ```
-1. Find the *PRIMARY KEY* from the list of keys for the account with the[``az-cosmosdb-keys-list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
```azurecli-interactive az cosmosdb keys list \
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. On the **New** page, search for and select **Azure Cosmos DB**.
-1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](/azure/cosmos-db/sql/introduction.md).
+1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
:::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/throughput-control-spark.md
+
+ Title: Azure Cosmos DB Spark Connector - Throughput Control
+description: Learn about controlling throughput for bulk data movements in the Azure Cosmos DB Spark Connector
++++ Last updated : 06/22/2022++++
+# Azure Cosmos DB Spark Connector - throughput control
+
+The [Spark Connector](create-sql-api-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control.
+
+## Why is throughput control important?
+
+ Having throughput control helps to isolate the performance needs of applications running against a container, by limiting the amount of [request units](../request-units.md) that can be consumed by a given Spark client.
+
+There are several advanced scenarios that benefit from client-side throughput control:
+
+- **Different operations and tasks have different priorities** - there can be a need to prevent normal transactions from being throttled due to data ingestion or copy activities. Some operations and/or tasks aren't sensitive to latency, and are more tolerant of being throttled than others.
+
+- **Provide fairness/isolation to different end users/tenants** - An application will usually have many end users. Some users may send too many requests, which consume all available throughput, causing others to get throttled.
+
+- **Load balancing of throughput between different Azure Cosmos DB clients** - in some use cases, it's important to make sure all the clients get a fair (equal) share of the throughput.
++
+Throughput control enables more granular RU rate limiting as needed.
+
+## How does throughput control work?
+
+Throughput control for the Spark Connector is configured by first creating a container that will define throughput control metadata, with a partition key of `groupId`, and `ttl` enabled. Here we create this container using Spark SQL, and call it `ThroughputControl`:
++
+```sql
+ %sql
+ CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.ThroughputControl
+ USING cosmos.oltp
+ OPTIONS(spark.cosmos.database = 'database-v4')
+ TBLPROPERTIES(partitionKeyPath = '/groupId', autoScaleMaxThroughput = '4000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
+```
+
+> [!NOTE]
+> The above example creates a container with [autoscale](../provision-throughput-autoscale.md). If you prefer standard provisioning, you can replace `autoScaleMaxThroughput` with `manualThroughput` instead.
+
+> [!IMPORTANT]
+> The partition key must be defined as `/groupId`, and `ttl` must be enabled, for the throughput control feature to work.
+
+Within the Spark config of a given application, we can then specify parameters for our workload. The following example enables throughput control and defines a throughput control group `name` and a `targetThroughputThreshold`. We also define the `database` and `container` in which the throughput control group is maintained:
+
+```scala
+ "spark.cosmos.throughputControl.enabled" -> "true",
+ "spark.cosmos.throughputControl.name" -> "SourceContainerThroughputControl",
+ "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
+ "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
+ "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
+```
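+
+To see how these settings are used, the following sketch shows a read that participates in the throughput control group. It assumes an existing Spark session; the account endpoint, account key, and `customer` container name are placeholders rather than values from this article, and the throughput control settings mirror the example above:
+
+```scala
+// Hedged example: read from a Cosmos DB container with throughput control enabled.
+// <account> and <account-key> are placeholders for your own account values.
+val readConfig = Map(
+  "spark.cosmos.accountEndpoint" -> "https://<account>.documents.azure.com:443/",
+  "spark.cosmos.accountKey" -> "<account-key>",
+  "spark.cosmos.database" -> "database-v4",
+  "spark.cosmos.container" -> "customer",
+  // Throughput control group settings, matching the configuration shown above.
+  "spark.cosmos.throughputControl.enabled" -> "true",
+  "spark.cosmos.throughputControl.name" -> "SourceContainerThroughputControl",
+  "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
+  "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
+  "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
+)
+
+// All requests issued by this DataFrame read are rate limited as part of the group.
+val df = spark.read.format("cosmos.oltp").options(readConfig).load()
+```
+
+Placing ingestion jobs and interactive queries in separate throughput control groups (with different `name` values and thresholds) is one way to address the priority and fairness scenarios described earlier.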
+
+In the above example, the `targetThroughputThreshold` is defined as **0.95**, so rate limiting will occur (and requests will be retried) when clients consume more than 95% (+/- 5-10 percent) of the throughput that is allocated to the container. This configuration is stored as a document in the throughput control container that looks like the following example:
+
+```json
+ {
+ "id": "ZGF0YWJhc2UtdjQvY3VzdG9tZXIvU291cmNlQ29udGFpbmVyVGhyb3VnaHB1dENvbnRyb2w.info",
+ "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
+ "targetThroughput": "",
+ "targetThroughputThreshold": "0.95",
+ "isDefault": true,
+ "_rid": "EHcYAPolTiABAAAAAAAAAA==",
+ "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
+ "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
+ "_attachments": "attachments/",
+ "_ts": 1651835869
+ }
+```
+> [!NOTE]
+> Throughput control does not do RU pre-calculation of each operation. Instead, it tracks the RU usages after the operation based on the response header. As such, throughput control is based on an approximation - and does not guarantee that amount of throughput will be available for the group at any given time.
+
+> [!WARNING]
+> The `targetThroughputThreshold` is **immutable**. If you change the target throughput threshold value, a new throughput control group is created (as long as you use version 4.10.0 or later, it can have the same name). You need to restart all Spark jobs that are using the group if you want to ensure they all consume the new threshold immediately (otherwise they will pick up the new threshold after the next restart).
+
+For each Spark client that uses the throughput control group, a record is created in the `ThroughputControl` container with a `ttl` of a few seconds, so the documents vanish quickly once a Spark client is no longer actively running. The record looks like the following example:
+
+```json
+ {
+ "id": "Zhjdieidjojdook3osk3okso3ksp3ospojsp92939j3299p3oj93pjp93jsps939pkp9ks39kp9339skp",
+ "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
+ "_etag": "\"1782728-w98999w-ww9998w9-99990000\"",
+ "ttl": 10,
+ "initializeTime": "2022-06-26T02:24:40.054Z",
+ "loadFactor": 0.97636377638898,
+ "allocatedThroughput": 484.89444487847,
+ "_rid": "EHcYAPolTiABAAAAAAAAAA==",
+ "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
+ "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
+ "_attachments": "attachments/",
+ "_ts": 1651835869
+ }
+```
+
+In each client record, the `loadFactor` attribute represents the load on the given client, relative to other clients in the throughput control group. The `allocatedThroughput` attribute shows how many RUs are currently allocated to this client. The Spark Connector will adjust allocated throughput for each client based on its load. This will ensure that each client gets a share of the throughput available that is proportional to its load, and all clients together don't consume more than the total allocated for the throughput control group to which they belong.
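+
+To make that proportional sharing concrete, here's an illustrative sketch only (the actual allocation is performed internally by the connector); the container throughput, threshold, and load factors are assumed values:
+
+```scala
+// Illustrative sketch only: proportional split of the group's throughput budget.
+val containerThroughput = 4000.0                       // assumed container RU/s
+val targetThroughputThreshold = 0.95                   // from the example configuration
+val loadFactors = Map("client-a" -> 0.75, "client-b" -> 0.25)   // assumed reported loads
+
+val groupBudget = containerThroughput * targetThroughputThreshold   // ~3,800 RU/s for the group
+val totalLoad = loadFactors.values.sum
+val allocated = loadFactors.map { case (client, load) =>
+  client -> groupBudget * (load / totalLoad)           // share proportional to each client's load
+}
+// allocated is approximately Map(client-a -> 2850.0, client-b -> 950.0)
+```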
++
+## Next steps
+
+* [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
+* [Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API](create-sql-api-spark.md).
+* Learn more about [Apache Spark](https://spark.apache.org/).
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
+
+ Title: Enable preview features in Cost Management Labs
+
+description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in.
++ Last updated : 06/23/2022++++++
+# Enable preview features in Cost Management Labs
+
+Cost Management Labs is an experience in the Azure portal where you can get a sneak peek at what's coming in Cost Management. You can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences.
+
+This article explains how to explore preview features and provides a list of the recent previews you might be interested in.
+
+## Explore preview features
+
+You can explore preview features from the Cost Management overview.
+
+1. On the Cost Management overview page, select the [Try preview](https://aka.ms/costmgmt/trypreview) command at the top of the page.
+2. From there, enable the features you'd like to use and select **Close** at the bottom of the page.
+ :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-management-labs.png" alt-text="Screenshot showing the Cost Management labs preview options." lightbox="./media/enable-preview-features-cost-management-labs/cost-management-labs.png" :::
+3. To see the features enabled, close and reopen Cost Management. You can reopen Cost Management by selecting the link in the notification in the top-right corner.
+ :::image type="content" source="./media/enable-preview-features-cost-management-labs/reopen-cost-management.png" alt-text="Screenshot showing the Reopen Cost Management notification." :::
+
+If you're interested in getting preview features even earlier:
+
+1. Navigate to Cost Management.
+2. Select **Go to preview portal**.
+
+Or, you can go directly to the [Azure preview portal](https://preview.portal.azure.com/).
+
+It's the same experience as the public portal, except with new improvements and preview features. Every change in Cost Management is available in the preview portal a week before it's in the full Azure portal.
+
+We encourage you to try out the preview features available in Cost Management Labs and share your feedback. It's your chance to influence the future direction of Cost Management. To provide feedback, use the **Report a bug** link in the Try preview menu. It's a direct way to communicate with the Cost Management engineering team.
+
+## Anomaly detection alerts
+
+<a name="anomalyalerts"></a>
+
+Get notified by email when a cost anomaly is detected on your subscription.
+
+Anomaly detection is available for Azure global subscriptions in the cost analysis preview.
+
+Here's an example of a cost anomaly shown in cost analysis:
+++
+To configure anomaly alerts:
+
+1. Open the cost analysis preview.
+1. Navigate to **Cost alerts** and select **Add** > **Add Anomaly alert**.
++
+For more information about anomaly detection and how to configure alerts, see [Identify anomalies and unexpected changes in cost](../understand/analyze-unexpected-charges.md).
+
+**Anomaly detection is now available by default in Azure global.**
+
+## Grouping SQL databases and elastic pools
+
+<a name="aksnestedtable"></a>
+
+Get an at-a-glance view of your total SQL costs by grouping SQL databases and elastic pools. They're shown under their parent server in the cost analysis preview. This feature is enabled by default.
+
+Understanding what you're being charged for can be complicated. The best place to start for many people is the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview. It shows resources that are incurring cost. But even a straightforward list of resources can be hard to follow when a single deployment includes multiple, related resources. To help summarize your resource costs, we're trying to group related resources together. So, we're changing cost analysis to show child resources.
+
+Many Azure services use nested or child resources. SQL servers have databases, storage accounts have containers, and virtual networks have subnets. Most of the child resources are only used to configure services, but sometimes the resources have their own usage and charges. SQL databases are perhaps the most common example.
+
+SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to find the server and each database and then manually sum up their total cost. As an example, you can see the **aepool** elastic pool at the top of the list below and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
+
+Here's an example showing classic cost analysis where multiple related resource costs aren't grouped.
++
+In the cost analysis preview, the child resources are grouped together under their parent resource. The grouping shows a quick, at-a-glance view of your deployment and its total cost. Using the same subscription, you can now see all three charges grouped together under the server, offering a one-line summary for your total server costs.
+
+Here's an example showing grouped resource costs with the **Grouping SQL databases and elastic pools** preview option enabled.
++
+You might also notice the change in row count. Classic cost analysis shows 53 rows where every resource is broken out on its own. The cost analysis preview only shows 25 rows. The difference is that the individual resources are being grouped together, making it easier to get an at-a-glance cost summary.
+
+In addition to SQL servers, you'll also see other services with child resources, like App Service, Synapse, and VNet gateways. Each is similarly shown grouped together in the cost analysis preview.
+
+**Grouping SQL databases and elastic pools is available by default in the cost analysis preview.**
+
+## Average in the cost analysis preview
+
+<a name="cav3average"></a>
+
+Average in the cost analysis preview shows your average daily or monthly cost at the top of the view.
++
+When the selected date range includes the current day, the average cost is calculated ending at yesterday's date. It doesn't include partial cost from the current day because data for the day isn't complete. Every service submits usage at different timelines that affects the average calculation. For more information about data latency and refresh processing, see [Understand Cost Management data](understand-cost-mgt-data.md).
+
+**Average in the cost analysis preview is available by default in the cost analysis preview.**
+
+## Budgets in the cost analysis preview
+
+<a name="budgetsfeature"></a>
+
+Budgets in the cost analysis preview help you quickly create and edit budgets directly from the cost analysis preview.
++
+If you don't have a budget yet, you'll see a link to create a new budget. Budgets created from the cost analysis preview are preconfigured with alerts. Thresholds are set for cost exceeding 50 percent, 80 percent, and 95 percent of your budget, or 100 percent of your forecast for the month. You can add other recipients or update alerts from the Budgets page.
+
+**Budgets in the cost analysis preview is available by default in the cost analysis preview.**
+
+## Charts in the cost analysis preview
+
+<a name="chartsfeature"></a>
+
+Charts in the cost analysis preview include a chart of daily or monthly charges for the specified date range.
++
+Charts are enabled on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
+
+## Streamlined menu
+
+<a name="onlyinconfig"></a>
+
+Cost Management includes a central management screen for all configuration settings. Some of the settings are also available directly from the Cost Management menu currently. Enabling the **Streamlined menu** option removes configuration settings from the menu.
+
+In the following image, the menu on the left is classic cost analysis. The menu on the right is the streamlined menu.
++
+You can enable **Streamlined menu** on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Feel free to [share your feedback](https://feedback.azure.com/d365community/idea/5e0ea52c-1025-ec11-b6e6-000d3a4f07b8). As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+
+## Open config items in the menu
+
+<a name="configinmenu"></a>
+
+Cost Management includes a central management view for all configuration settings. Currently, selecting a setting opens the configuration page outside of the Cost Management menu.
++
+**Open config items in the menu** is an experimental option to open the configuration page in the Cost Management menu. The option makes it easier to switch to other menu items with one selection. The feature works best with the [streamlined menu](#streamlined-menu).
+
+You can enable **Open config items in the menu** on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal.
+
+[Share your feedback](https://feedback.azure.com/d365community/idea/1403a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+
+## Change scope from menu
+
+<a name="changescope"></a>
+
+If you manage many subscriptions and need to switch between subscriptions or resource groups often, you might want to include the **Change scope from menu** option.
++
+It allows changing the scope from the menu for quicker navigation. To enable the feature, navigate to the [Cost Management Labs preview page](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu/~/overview/open/overview.preview) in the Azure portal.
+
+[Share your feedback](https://feedback.azure.com/d365community/idea/e702a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+
+## How to share feedback
+
+We're always listening and making constant improvements based on your feedback, so we welcome it. Here are a few ways to share your feedback with the team:
+
+- If you have a problem or are seeing data that doesn't make sense, submit a support request. It's the fastest way to investigate and resolve data issues and major bugs.
+- For feature requests, you can share ideas and vote up others in the [Cost Management feedback forum](https://aka.ms/costmgmt/feedback).
+- Take advantage of the **How would you rate…** prompts in the Azure portal to let us know how each experience is working for you. We monitor the feedback proactively to identify and prioritize changes. You'll see either a blue option in the bottom-right corner of the page or a banner at the top.
+
+## Next steps
+
+Learn about [what's new in Cost Management](https://azure.microsoft.com/blog/tag/cost-management/).
cost-management-billing Open Banking Strong Customer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/open-banking-strong-customer-authentication.md
As of September 14, 2019, banks in the 31 countries/regions of the [European Eco
## What PSD2 means for Azure customers
-If you pay for Azure with a credit card issued by a bank in the[European Economic Area](https://en.wikipedia.org/wiki/European_Economic_Area), you might be required to complete multi-factor authentication for the payment method of your account. You may be prompted to complete the multi-factor authentication challenge when signing up your Azure account or upgrading your Azure accountΓÇöeven if you are not making a purchase at the time. You may also be asked to provide multi-factor authentication when you change the payment method of your Azure account, remove your spending cap, or make an immediate payment from the Azure portalΓÇö such as settling outstanding balances or purchasing Azure credits.
+If you pay for Azure with a credit card issued by a bank in the [European Economic Area](https://en.wikipedia.org/wiki/European_Economic_Area), you might be required to complete multi-factor authentication for the payment method of your account. You may be prompted to complete the multi-factor authentication challenge when signing up your Azure account or upgrading your Azure account, even if you are not making a purchase at the time. You may also be asked to provide multi-factor authentication when you change the payment method of your Azure account, remove your spending cap, or make an immediate payment from the Azure portal, such as settling outstanding balances or purchasing Azure credits.
If your bank rejects your monthly Azure charges, you'll get a past due email from Azure with instructions to fix it. You can complete the multi-factor authentication challenge and settle your outstanding charges in the Azure portal.
cost-management-billing Determine Reservation Purchase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/determine-reservation-purchase.md
Previously updated : 09/20/2021 Last updated : 06/23/2022
Use the following sections to help analyze your daily usage data to determine yo
### Analyze usage for a VM reserved instance purchase
-Identify the right VM size for your purchase. For example, a reservation purchased for ES series VMs don't apply to E series VMs, and vice-versa.
+Identify the right VM size for your purchase. For example, a reservation purchased for ES series VMs doesn't apply to E series VMs, and vice-versa.
Promo series VMs don't get a reservation discount, so remove them from your analysis.
If you want to analyze at the instance size family level, you can get the instan
Reserved capacity applies to Azure Synapse Analytics DWU pricing. It doesn't apply to Azure Synapse Analytics license cost or any costs other than compute.
-To narrow eligible usage, apply follow filters on your usage data:
-
+To narrow eligible usage, apply the following filters to your usage data:
- Filter **MeterCategory** for **SQL Database**. - Filter **MeterName** for **vCore**.
The data informs you about the consistent usage for:
### Analysis for Azure Synapse Analytics
-Reserved capacity applies to Azure Synapse Analytics DWU usage and is purchased in increments on 100 DWU. To narrow eligible usage, apply the follow filters on your usage data:
+Reserved capacity applies to Azure Synapse Analytics DWU usage and is purchased in increments of 100 DWUs. To narrow eligible usage, apply the following filters on your usage data:
- Filter **MeterName** for **100 DWUs**. - Filter **Meter Sub-Category** for **Compute Optimized Gen2**.
Learn more about [recommendations](reserved-instance-purchase-recommendations.md
## Recommendations in the Cost Management Power BI app
-Enterprise Agreement customers can use the VM RI Coverage reports for VMs and purchase recommendations. The coverage reports show you total usage and the usage that's covered by reserved instances.
+Enterprise Agreement customers can use the VM RI Coverage reports for VMs and purchase recommendations. The coverage reports show total usage and the usage that's covered by reserved instances.
1. Get the [Cost Management App](https://appsource.microsoft.com/product/power-bi/costmanagement.azurecostmanagementapp). 2. Go to the VM RI Coverage report – Shared or Single scope, depending on which scope you want to purchase at.
Enterprise Agreement customers can use the VM RI Coverage reports for VMs and pu
Reservation purchase recommendations are available in [Azure Advisor](https://portal.azure.com/#blade/Microsoft_Azure_Expert/AdvisorMenuBlade/overview). - Advisor has only single-subscription scope recommendations.-- Advisor recommendations are calculated using 30-day look-back period. The projected savings are for a 3-year reservation term.
+- Advisor recommendations are calculated using a 30-day look-back period. The projected savings are for a three-year reservation term.
- If you purchase a shared-scope reservation, Advisor reservation purchase recommendations can take up to 30 days to disappear. ## Recommendations using APIs
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-global-parameters.md
description: Set global parameters for each of your Azure Data Factory environme
--++ Last updated 01/31/2022
Global parameters can be used in any [pipeline expression](control-flow-expressi
## <a name="cicd"></a> Global parameters in CI/CD
-There are two ways to integrate global parameters in your continuous integration and deployment solution:
+We recommend including global parameters in the ARM template during CI/CD. The new mechanism of including global parameters in the ARM template (from 'Manage hub' -> 'ARM template' -> 'Include global parameters in ARM template'), as illustrated below, will not conflict with or override the factory-level settings as the older mechanism did, so additional PowerShell for global parameters deployment during CI/CD is no longer required.
-* Include global parameters in the ARM template
-* Deploy global parameters via a PowerShell script
+
+> [!NOTE]
+> We have moved the UI experience for including global parameters from the 'Global parameters' section to the 'ARM template' section in the manage hub.
+> If you are already using the older mechanism (from 'Manage hub' -> 'Global parameters' -> 'Include in ARM template'), you can continue to use it. We will continue to support it.
+
+If you are using the older flow of integrating global parameters in your continuous integration and deployment solution, it will continue to work:
-For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Microsoft Purview connection, **PowerShell script** method is required. You can find more about PowerShell script method later. Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+* Include global parameters in the ARM template (from 'Manage hub' -> 'Global parameters' -> 'Include in ARM template')
+* Deploy global parameters via a PowerShell script
+
+We strongly recommend using the new mechanism of including global parameters in the ARM template (from 'Manage hub' -> 'ARM template' -> 'Include global parameters in an ARM template') since it makes CI/CD with global parameters much more straightforward and easier to manage.
> [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Microsoft Purview connection, do not use Include global parameters method; use PowerShell script method.
+> The **Include global parameters in an ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode.
> [!WARNING] >You cannot use '-' in the parameter name. You will receive an error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression >'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But, you can use the '_' in the parameter name.
-Adding global parameters to the ARM template adds a factory-level setting that will override other factory-level settings such as a customer-managed key or git configuration in other environments. If you have these settings enabled in an elevated environment such as UAT or PROD, it's better to deploy global parameters via a PowerShell script in the steps highlighted below.
-### Deploying using PowerShell
+### Deploying using PowerShell (older mechanism)
+
+> [!NOTE]
+> This is not required if you're including global parameters using the 'Manage hub' -> 'ARM template' -> 'Include global parameters in an ARM template' option, since you can deploy the ARM template without breaking the factory-level configurations. For backward compatibility, we will continue to support the PowerShell method.
The following steps outline how to deploy global parameters via PowerShell. This is useful when your target factory has a factory-level setting such as customer-managed key.
foreach ($gp in $globalParametersObject.GetEnumerator()) {
Write-Host "Adding global parameter:" $gp.Key $globalParameterValue = $gp.Value.ToObject([Microsoft.Azure.Management.DataFactory.Models.GlobalParameterSpecification]) $newGlobalParameters.Add($gp.Key, $globalParameterValue)
-}
+}
$dataFactory = Get-AzDataFactoryV2 -ResourceGroupName $resourceGroupName -Name $dataFactoryName $dataFactory.GlobalParameters = $newGlobalParameters
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
By default, every data flow activity spins up a new Spark cluster based upon the
However, if most of your data flows execute in parallel, it is not recommended that you enable TTL for the IR that you use for those activities. Only one job can run on a single cluster at a time. If there is an available cluster, but two data flows start, only one will use the live cluster. The second job will spin up its own isolated cluster. > [!NOTE]
-> Time to live is not available when using the auto-resolve integration runtime
+> Time to live is not available when using the auto-resolve integration runtime (default).
## Next steps
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Title: Managing Azure Data Factory Studio preview updates
-description: Learn how to enable/disable Azure Data Factory studio preview updates.
+description: Learn more about the Azure Data Factory studio preview experience.
Previously updated : 06/21/2022 Last updated : 06/23/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
## Current Preview Updates
-### Dataflow Data first experimental view
+ [**Dataflow data first experimental view**](#dataflow-data-first-experimental-view)
+ * [Configuration panel](#configuration-panel)
+ * [Transformation settings](#transformation-settings)
+ * [Data preview](#data-preview)
+
+ [**Pipeline experimental view**](#pipeline-experimental-view)
+ * [Adding activities](#adding-activities)
+ * [ForEach activity container](#foreach-activity-container)
+
+### Dataflow data first experimental view
UI (user interfaces) changes have been made to mapping data flows. These changes were made to simplify and streamline the dataflow creation process so that you can focus on what your data looks like. The dataflow authoring experience remains the same as detailed [here](https://aka.ms/adfdataflows), except for certain areas detailed below.
Now, for each transformation, the configuration panel will only have **Data Prev
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-6.png" alt-text="Screenshot of the configuration panel with only a Data preview tab."::: If no transformation is selected, the panel will show the pre-existing data flow configurations: **Parameters** and **Settings**.
-
-
+ #### Transformation settings Settings specific to a transformation will now show in a pop up instead of the configuration panel. With each new transformation, a corresponding pop-up will automatically appear.
If debug mode is on, **Data Preview** in the configuration panel will give you a
Columns can be rearranged by dragging a column by its header. You can also sort columns using the arrows next to the column titles and you can export data preview data using **Export to CSV** on the banner above column headers. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-9.png" alt-text="Screenshot of Data preview with Export button in the top right corner of the banner and Elapsed Time highlighted in the bottom left corner of the screen.":::
-
+ ### Pipeline experimental view UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process. + #### Adding activities You now have the option to add an activity using the add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
You now have the option to add an activity using the add button in the bottom ri
Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-10.png" alt-text="Screenshot of new pipeline activity adding experience with a drop down list to select activities.":::
-
+ #### ForEach activity container You can now view the activities contained in your ForEach activity.
You can now view the activities contained in your ForEach activity.
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-11.png" alt-text="Screenshot of new ForEach activity container."::: You have two options to add activities to your ForEach loop.+ 1. Use the + button in your ForEach container to add an activity. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new ForEach activity container with the add button highlighted on the left side of the center of the screen.":::
data-factory Data Factory Azure Blob Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-blob-connector.md
Whether you use the tools or APIs, you perform the following steps to create a p
2. Create **datasets** to represent input and output data for the copy operation. In the example mentioned in the last step, you create a dataset to specify the blob container and folder that contains the input data. And, you create another dataset to specify the SQL table in Azure SQL Database that holds the data copied from the blob storage. For dataset properties that are specific to Azure Blob Storage, see [dataset properties](#dataset-properties) section. 3. Create a **pipeline** with a copy activity that takes a dataset as an input and a dataset as an output. In the example mentioned earlier, you use BlobSource as a source and SqlSink as a sink for the copy activity. Similarly, if you are copying from Azure SQL Database to Azure Blob Storage, you use SqlSource and BlobSink in the copy activity. For copy activity properties that are specific to Azure Blob Storage, see [copy activity properties](#copy-activity-properties) section. For details on how to use a data store as a source or a sink, click the link in the previous section for your data store.
-When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except .NET API), you define these Data Factory entities by using the JSON format. For samples with JSON definitions for Data Factory entities that are used to copy data to/from an Azure Blob Storage, see [JSON examples](#json-examples-for-copying-data-to-and-from-blob-storage ) section of this article.
+When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except .NET API), you define these Data Factory entities by using the JSON format. For samples with JSON definitions for Data Factory entities that are used to copy data to/from an Azure Blob Storage, see [JSON examples](#json-examples-for-copying-data-to-and-from-blob-storage) section of this article.
The following sections provide details about JSON properties that are used to define Data Factory entities specific to Azure Blob Storage.
data-factory Data Factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-security-considerations.md
The following cloud data stores require approving of IP address of the gateway m
**Answer:** We do not support this feature yet. We are actively working on it. **Question:** What are the port requirements for the gateway to work?
-**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **Inbound Port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source/ destination, then you need to open **1433** port as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of gateway) section.
+**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **Inbound Port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source/ destination, then you need to open **1433** port as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of-gateway) section.
**Question:** What are certificate requirements for Gateway? **Answer:** Current gateway requires a certificate that is used by the credential manager application for securely setting data store credentials. This certificate is a self-signed certificate created and configured by the gateway setup. You can use your own TLS/SSL certificate instead. For more information, see [click-once credential manager application](#click-once-credentials-manager-app) section.
databox-online Azure Stack Edge Mini R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
To understand and refine the performance of your solution, you could use:
- `dkr image [prune]` to clean up unused images and free up space. - `dkr ps --size` to view the approximate size of a running container.
- For more information on the available commands, go to [ Debug Kubernetes issues](azure-stack-edge-gpu-connect-powershell-interface.md#debug-kubernetes-issues-related-to-iot-edge).
+ For more information on the available commands, go to [Debug Kubernetes issues](azure-stack-edge-gpu-connect-powershell-interface.md#debug-kubernetes-issues-related-to-iot-edge).
Finally, make sure that you validate your solution on your dataset and quantify the performance on Azure Stack Edge Mini R before deploying in production.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
The description has also been updated to better explain the purpose of this hard
| Recommendation | Description | Severity | |--|--|:--:|
-| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machineΓÇÖs OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches arenΓÇÖt encrypted, and data isnΓÇÖt encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see <https://aka.ms/diskencryptioncomparison>.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) youΓÇÖre using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
+| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For more information, see the [comparison of different disk encryption technologies in Azure](https://aka.ms/diskencryptioncomparison).<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
### Continuous export of secure score and regulatory compliance data released for general availability (GA)
To access this information, you can use any of the methods in the table below.
| Tool | Details | |-||
-| REST API call | GET <https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates> |
+| REST API call | `GET https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates` |
| Azure Resource Graph | `securityresources`<br>`where type == "microsoft.security/assessments"` | | Continuous export | The two dedicated fields will be available the Log Analytics workspace data | | [CSV export](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations) | The two fields are included in the CSV files |
To ensure that Kubernetes workloads are secure by default, Security Center is ad
The early phase of this project includes a private preview and the addition of new (disabled by default) policies to the ASC_default initiative.
-You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview at <https://aka.ms/SecurityPrP> and select from the following options:
+You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview via the [Microsoft Cloud Security Private Community](https://aka.ms/SecurityPrP) and select from the following options:
1. **Single Preview** - To join only this private preview. Explicitly mention "ASC Continuous Scan" as the preview you would like to join. 1. **Ongoing Program** - To be added to this and future private previews. You'll need to complete a profile and privacy agreement.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 06/20/2022 Last updated : 06/23/2022 # What's new in Microsoft Defender for Cloud?
Learn how to [enable protections](enable-enhanced-security.md) for your database
### Auto-provisioning of Microsoft Defender for Endpoint unified solution
-Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows servers 2012R2 and 2016 used the MDE legacy solution dependent on Log Analytics agent.
+Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution&preserve-view=true) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows servers 2012R2 and 2016 used the MDE legacy solution dependent on Log Analytics agent.
Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multi-cloud connectors. For Azure subscriptions with Servers plan 2 that enabled MDE integration *after* 06-20-2022, the unified solution is enabled by default for all machines. Azure subscriptions with the Defender for Servers Plan 2 enabled with MDE integration *before* 06-20-2022 can now enable unified solution installation for Windows servers 2012R2 and 2016 through the dedicated button in the Integrations page: :::image type="content" source="media/integration-defender-for-endpoint/enable-unified-solution.png" alt-text="The integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint, is enabled." lightbox="media/integration-defender-for-endpoint/enable-unified-solution.png":::
-Learn more about [MDE integration with Defender for Servers.](integration-defender-for-endpoint.md#users-with-defender-for-servers-enabled-and-microsoft-defender-for-endpoint-deployed).
+Learn more about [MDE integration with Defender for Servers](integration-defender-for-endpoint.md#users-with-defender-for-servers-enabled-and-microsoft-defender-for-endpoint-deployed).
### Deprecating the "API App should only be accessible over HTTPS" policy The policy `API App should only be accessible over HTTPS` has been deprecated. This policy is replaced with the `Web Application should only be accessible over HTTPS` policy, which has been renamed to `App Service apps should only be accessible over HTTPS`.
-To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md)
+To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md).
## May 2022
defender-for-iot How To Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
To use a default property value, remove the property from the configuration obje
The following table contains the controllable properties of Defender for IoT security agents.
-Default values are available in the proper schema in [GitHub](https\://aka.ms/iot-security-module-default).
+Default values are available in the proper schema in [GitHub](https://aka.ms/iot-security-module-default).
| Name| Status | Valid values| Default values| Description | |-|--|--|-|-|
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
To send notifications:
For more information about forwarding rules, see [Forward alert information](how-to-forward-alert-information-to-partners.md). +
+## Upload and play PCAP files
+
+When troubleshooting, you may want to examine the data recorded in a specific PCAP file. To do so, you can upload a PCAP file to your sensor console and replay the recorded data.
+
+To view the PCAP player in your sensor console, you'll first need to configure the relevant advanced configuration option.
+
+The maximum size for uploaded files is 2 GB.
+
+**To show the PCAP player in your sensor console**:
+
+1. On your sensor console, go to **System settings > Sensor management > Advanced Configurations**.
+
+1. In the **Advanced configurations** pane, select the **Pcaps** category.
+
+1. In the configurations displayed, change `enabled=0` to `enabled=1`, and select **Save**.
+
+The **Play PCAP** option is now available in the sensor console's settings, under: **System settings > Basic > Play PCAP**.
+
+**To upload and play a PCAP file**:
+
+1. On your sensor console, select **System settings > Basic > Play PCAP**.
+
+1. In the **PCAP PLAYER** pane, select **Upload** and then navigate to and select the file you want to upload.
+
+1. Select **Play** to play your PCAP file, or **Play All** to play all PCAP files currently loaded.
+
+> [!TIP]
+> Select **Clear All** to clear the sensor of all PCAP files loaded.
+ ## Adjust system properties System properties control various operations and settings in the sensor. Editing or modifying them might damage the operation of the sensor console.
To access system properties:
3. Select **System Properties** from the **General** section. + ## Next steps For more information, see:
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Export alert information to a .csv file. You can export information of all alert
1. Select **Save**.
-You can learn more [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information). You can also [Test forwarding rules](how-to-forward-alert-information-to-partners.md#test-forwarding-rules), or [Edit and delete forwarding rules](how-to-forward-alert-information-to-partners.md#edit-and-delete-forwarding-rules). You can also learn more about[Forwarding rules and alert exclusion rules](how-to-forward-alert-information-to-partners.md#forwarding-rules-and-alert-exclusion-rules).
+You can learn more [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information). You can also [Test forwarding rules](how-to-forward-alert-information-to-partners.md#test-forwarding-rules), or [Edit and delete forwarding rules](how-to-forward-alert-information-to-partners.md#edit-and-delete-forwarding-rules). You can also learn more about [Forwarding rules and alert exclusion rules](how-to-forward-alert-information-to-partners.md#forwarding-rules-and-alert-exclusion-rules).
## Create alert exclusion rules
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
+
+ Title: Integrations with partner services - Microsoft Defender for IoT
+description: Learn about supported integrations with Microsoft Defender for IoT.
Last updated : 06/21/2022+++
+# Integrations with partner services
+
+Integrate Microsoft Defender for IoT with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
+
+## Supported integrations
+
+The following table lists available integrations for Microsoft Defender for IoT, as well as links for specific configuration information.
++
+|Partner service |Description | Learn more |
+||||
+|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+|**CyberArk** | Send CyberArk PSM syslog data on remote sessions and verification failures to Defender for IoT for data correlation. | [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) |
+|**Forescout** | Automate actions in Forescout based on activity detected by Defender for IoT, and correlate Defender for IoT data with other *Forescout eyeExtend* modules that oversee monitoring, incident management, and device control. | [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md) |
+|**Fortinet** | Send Defender for IoT data to Fortinet services for: <br><br>- Enhanced network visibility in FortiSIEM<br>- Extra abilities in FortiGate to stop anomalous behavior | [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md) |
+|**Palo Alto** |Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | [Integrate Palo Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
+|**QRadar** |Forward Defender for IoT alerts to IBM QRadar. | [Integrate QRadar with Microsoft Defender for IoT](tutorial-qradar.md) |
+|**ServiceNow** | View Defender for IoT device detections, attributes, and connections in ServiceNow. | [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) |
+|**Splunk** | Send Defender for IoT alerts to Splunk. | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
+|**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) |
+
+## Next steps
+
+For more information, see:
+
+**Device inventory**:
+
+- [Use the Device inventory in the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Use the Device inventory in the OT sensor](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [Use the Device inventory in the on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+
+**Alerts**:
+
+- [View alerts in the Azure portal](how-to-manage-cloud-alerts.md)
+- [View alerts in the OT sensor](how-to-view-alerts.md)
+- [View alerts in the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
-| 22.1.5 | 06/2022 | 03/2022 |
+| 22.1.5 | 06/2022 | 03/2023 |
| 22.1.4 | 04/2022 | 12/2022 | | 22.1.3 | 03/2022 | 11/2022 | | 22.1.1 | 02/2022 | 10/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 | | 10.5.2 | 10/2021 | 07/2022 |
-## June
+## June 2022
**Sensor software version**: 22.1.5
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
# Mandatory fields. Title: What is Azure Digital Twins?
-description: Overview of Azure Digital Twins, what the service comprises, and how it can be used in a wider cloud solution.
+description: Overview of the Azure Digital Twins IoT platform, including its features and value.
Previously updated : 03/24/2022 Last updated : 06/17/2022
*Azure Digital Twins* is a platform as a service (PaaS) offering that enables the creation of twin graphs based on digital models of entire environments, which could be buildings, factories, farms, energy networks, railways, stadiums, and more, even entire cities. These digital models can be used to gain insights that drive better products, optimized operations, reduced costs, and breakthrough customer experiences.
-Azure Digital Twins can be used to design a digital twin architecture that represents actual IoT devices in a wider cloud solution, and which connects to IoT Hub device twins to send and receive live data.
+Azure Digital Twins can be used to design a digital twin architecture that represents actual IoT devices in a wider cloud solution, and which connects to [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) device twins to send and receive live data.
> [!NOTE] > IoT Hub **device twins** are different from Azure Digital Twins **digital twins**. While *IoT Hub device twins* are maintained by your IoT hub for each IoT device that you connect to it, *digital twins* in Azure Digital Twins can be representations of anything defined by digital models and instantiated within Azure Digital Twins. Take advantage of your domain expertise on top of Azure Digital Twins to build customized, connected solutions that: * Model any environment, and bring digital twins to life in a scalable and secure manner
-* Connect assets such as IoT devices and existing business systems
-* Use a robust event system to build dynamic business logic and data processing
-* Integrate with Azure data, analytics, and AI services to help you track the past and then predict the future
+* Connect assets such as IoT devices and existing business systems, using a robust event system to build dynamic business logic and data processing
+* Query the live execution environment to extract real-time insights from your twin graph
+* Build connected 3D visualizations of your environment that display business logic and twin data in context
+* Query historized environment data and integrate with other Azure data, analytics, and AI services to better track the past and predict the future
-## Open modeling language
+## Define your business environment
In Azure Digital Twins, you define the digital entities that represent the people, places, and things in your physical environment using custom twin types called [models](concepts-models.md).
-You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define models such as Building, Floor, and Elevator. You can then create digital twins based on these models to represent your specific environment.
+You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define models for a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe types of entities according to their state properties, telemetry events, commands, components, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
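
As a rough sketch, a minimal DTDL v2 model for a hypothetical Building type might look like the following; the model ID, property, and relationship names are illustrative only and aren't part of any published ontology:

```json
{
  "@id": "dtmi:example:Building;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Building",
  "contents": [
    {
      "@type": "Property",
      "name": "temperature",
      "schema": "double"
    },
    {
      "@type": "Relationship",
      "name": "contains",
      "target": "dtmi:example:Floor;1"
    }
  ]
}
```

A twin created from this model would expose a `temperature` property and could be connected to Floor twins through the `contains` relationship.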
-*Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships. Here are some other capabilities of models:
-* Models define semantic *relationships* between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs.
-* You can specialize twins using model *inheritance*. One model can inherit from another.
-* You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
+>[!TIP]
+>DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem.
-DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem.
+Once you've defined your data models, use them to create [digital twins](concepts-twins-graph.md) that represent each specific entity in your environment. For example, you might use the Building model definition to create several Building-type twins (Building 1, Building 2, and so on). You can also use the relationships in the model definitions to connect twins to each other, forming a conceptual graph.
-## Live execution environment
+You can view your Azure Digital Twins graph in [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), which provides an interface to help you build and interact with your graph:
-Digital models in Azure Digital Twins are live, up-to-date representations of the real world. Using the relationships in your custom DTDL models, you'll connect twins into a live graph representing your environment.
-You can visualize your Azure Digital Twins graph in [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), which provides the following interface for interacting with your graph:
+## Contextualize IoT and business system data
+Digital models in Azure Digital Twins are live, up-to-date representations of the real world.
+
+To keep digital twin properties current against your environment, you can use [IoT Hub](../iot-hub/about-iot-hub.md) to connect your solution to IoT and IoT Edge devices. These hub-managed devices are represented as part of your twin graph, and provide the data that drives your model. You can create a new IoT Hub to use with Azure Digital Twins, or [connect an existing IoT Hub](how-to-ingest-iot-hub-data.md) along with the devices it already manages.
+
+You can also drive Azure Digital Twins from other data sources, using [REST APIs](concepts-apis-sdks.md) or connectors to other Azure services like [Logic Apps](../logic-apps/logic-apps-overview.md). These methods can help you input data from business systems and incorporate them into your twin graph.
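
As a sketch of what pushing data into a twin from an external system could look like, the data plane REST API accepts a JSON Patch document that updates twin properties. The host name, twin ID, and `temperature` property below are placeholders, and the API version shown is one of the published data plane versions:

```http
PATCH https://<your-instance>.api.<region>.digitaltwins.azure.net/digitaltwins/<twin-id>?api-version=2020-10-31
Authorization: Bearer <token>
Content-Type: application/json

[
  { "op": "replace", "path": "/temperature", "value": 71.5 }
]
```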
+
+Azure Digital Twins provides a rich event system to keep your graph current, including data processing that can be customized to match your business logic. You can connect external compute resources, such as [Azure Functions](../azure-functions/functions-overview.md), to drive this data processing in flexible, customized ways.
-Azure Digital Twins provides a rich event system to keep that graph current with data processing and business logic. You can connect external compute resources, such as [Azure Functions](../azure-functions/functions-overview.md), to drive this data processing in flexible, customized ways.
+## Query for environment insights
-You can also extract insights from the live execution environment, using Azure Digital Twins' powerful *query APIΓÇï*. The API lets you query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you.
+Azure Digital Twins provides a powerful query API to help you extract insights from the live execution environment. The API can query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you. For more details about the language used to craft these queries, see [Query language](concepts-query-language.md).
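
For example, a query along the following lines (using a hypothetical model ID and property) would return every twin created from a Building model whose reported temperature is above a threshold:

```sql
SELECT T
FROM DIGITALTWINS T
WHERE IS_OF_MODEL(T, 'dtmi:example:Building;1')
AND T.temperature > 75
```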
-## Input from IoT and business systems
+## Visualize environment in 3D Scenes Studio (preview)
-To keep the live execution environment of Azure Digital Twins up to date with the real world, you can use [IoT Hub](../iot-hub/about-iot-hub.md) to connect your solution to IoT and IoT Edge devices. These hub-managed devices are represented as part of your twin graph, and provide the data that drives your model.
+Azure Digital Twins [3D Scenes Studio (preview)](concepts-3d-scenes-studio.md) is an immersive visual 3D environment, where end users can monitor, diagnose, and investigate operational digital twin data with the visual context of 3D assets. With a digital twin graph and curated 3D model, subject matter experts can leverage the studio's low-code builder to map the 3D elements to digital twins in the Azure Digital Twins graph, and define UI interactivity and business logic for a 3D visualization of a business environment. The 3D scenes can then be consumed in the hosted 3D Scenes Studio, or in a custom application that leverages the embeddable 3D viewer component.
-You can create a new IoT Hub for this purpose with Azure Digital Twins, or [connect an existing IoT Hub](how-to-ingest-iot-hub-data.md) along with the devices it already manages.
+Here's an example of a scene in 3D Scenes Studio, showing how digital twin properties can be visualized with 3D elements:
-You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).
-## Output data for storage and analytics
+## Share twin data to other Azure services
The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage.
Here are some things you can do with event routes in Azure Digital Twins:
Flexible egress of data is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
-## Azure Digital Twins in a solution context
+## Sample solution architecture
Azure Digital Twins is commonly used in combination with other Azure services as part of a larger IoT solution.
-A sample architecture of a complete solution using Azure Digital Twins may contain the following components:
+A possible architecture of a complete solution using Azure Digital Twins may contain the following components:
* The Azure Digital Twins service instance. This service stores your twin models and your twin graph with its state, and orchestrates event processing. * One or more client apps that drive the Azure Digital Twins instance by configuring models, creating topology, and extracting insights from the twin graph. * One or more external compute resources to process events generated by Azure Digital Twins, or connected data sources such as devices. One common way to provide compute resources is via [Azure Functions](../azure-functions/functions-overview.md). * An IoT hub to provide device management and IoT data stream capabilities. * Downstream services to provide things like workflow integration (like Logic Apps), cold storage (like Azure Data Lake), or analytics (like Azure Data Explorer or Time Series Insights).
-The following diagram shows where Azure Digital Twins lies in the context of a larger Azure IoT solution.
+The following diagram shows where Azure Digital Twins might lie in the context of a larger sample Azure IoT solution.
:::image type="content" source="media/overview/solution-context.png" alt-text="Diagram showing input sources, output services, and two-way communication with both client apps and external compute resources." border="false" lightbox="media/overview/solution-context.png":::
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Write-Host "Press [ENTER] to continue..."
For a step-by-step tutorial that guides you through the process of creating a template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
For other ways to deploy Azure Database Migration Service, see: - [Azure portal](quickstart-create-data-migration-service-portal.md)
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Partition ownership records in the checkpoint store keep track of Event Hubs nam
| | | : | | | | | mynamespace.servicebus.windows.net | myeventhub | myconsumergroup | 844bd8fb-1f3a-4580-984d-6324f9e208af | 15 | 2020-01-15T01:22:00 |
-Each event processor instance acquires ownership of a partition and starts processing the partition from last known [checkpoint](# Checkpointing). If a processor fails (VM shuts down), then other instances detect it by looking at the last modified time. Other instances try to get ownership of the partitions previously owned by the inactive instance, and the checkpoint store guarantees that only one of the instances succeeds in claiming ownership of a partition. So, at any given point of time, there is at most one processor that receives events from a partition.
+Each event processor instance acquires ownership of a partition and starts processing the partition from last known [checkpoint](#checkpointing). If a processor fails (VM shuts down), then other instances detect it by looking at the last modified time. Other instances try to get ownership of the partitions previously owned by the inactive instance, and the checkpoint store guarantees that only one of the instances succeeds in claiming ownership of a partition. So, at any given point of time, there is at most one processor that receives events from a partition.
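
As an illustrative sketch of how a processor participates in this load-balancing scheme, the following Python snippet wires the `azure-eventhub` consumer client to a blob-based checkpoint store (from the `azure-eventhub-checkpointstoreblob` package); the connection strings and names are placeholders:

```python
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Placeholder connection strings and names
STORAGE_CONN_STR = "<storage-connection-string>"
CONTAINER_NAME = "<checkpoint-container>"
EVENT_HUB_CONN_STR = "<event-hubs-namespace-connection-string>"
EVENT_HUB_NAME = "myeventhub"

# The checkpoint store persists partition ownership and checkpoint records.
checkpoint_store = BlobCheckpointStore.from_connection_string(STORAGE_CONN_STR, CONTAINER_NAME)

client = EventHubConsumerClient.from_connection_string(
    EVENT_HUB_CONN_STR,
    consumer_group="myconsumergroup",
    eventhub_name=EVENT_HUB_NAME,
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    # Process the event, then record a checkpoint for this partition.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")
    partition_context.update_checkpoint(event)

with client:
    # The client claims partition ownership through the checkpoint store
    # and resumes each partition from its last recorded checkpoint.
    client.receive(on_event=on_event, starting_position="-1")
```

If a second instance of this processor starts with the same consumer group and checkpoint store, the two instances split the partitions between them; if one stops, the other detects the stale ownership records and takes over the orphaned partitions.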
## Receive messages
expressroute Cross Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/cross-network-connectivity.md
The following table shows the route table of the private peering of the ExpressR
The following table shows the route table of the private peering of the ExpressRoute of Fabrikam Inc., after configuring Global Reach. Notice that the route table now has routes belonging to both on-premises networks.
-![Fabrikam ExpressRoute route table after Global Reach]( ./media/cross-network-connectivity/fabrikamexr-rt-gr.png )
+![Fabrikam ExpressRoute route table after Global Reach](./media/cross-network-connectivity/fabrikamexr-rt-gr.png)
## Next steps
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[EdgeConnex](https://www.edgeconnex.com/services/edge-data-centers-proximity-matters/)** | Megaport, PacketFabric | | **[Flexential](https://www.flexential.com/connectivity/cloud-connect-microsoft-azure-expressroute)** | IX Reach, Megaport, PacketFabric | | **[QTS Data Centers](https://www.qtsdatacenters.com/hybrid-solutions/connectivity/azure-cloud)** | Megaport, PacketFabric |
-| **[Stream Data Centers]( https://www.streamdatacenters.com/products-services/network-cloud/ )** | Megaport |
+| **[Stream Data Centers](https://www.streamdatacenters.com/products-services/network-cloud/)** | Megaport |
| **[RagingWire Data Centers](https://www.ragingwire.com/wholesale/wholesale-data-centers-worldwide-nexcenters)** | IX Reach, Megaport, PacketFabric | | **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach | | **vXchnge** | IX Reach, Megaport |
governance Policy As Code Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-as-code-github.md
resources, the quickstart articles explain how to do so.
### Export Azure Policy objects from the Azure portal
+ > [!NOTE]
+ > Owner permissions are required at the scope of the policy objects being exported to GitHub.
+ To export a policy definition from Azure portal, follow these steps: 1. Launch the Azure Policy service in the Azure portal by clicking **All services**, then searching
hdinsight Hdinsight Go Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-go-sdk-overview.md
ms.devlang: golang Previously updated : 01/03/2020 Last updated : 06/23/2022 # HDInsight SDK for Go (Preview)
hdinsight Hdinsight Hadoop Create Linux Clusters Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md
description: Learn how to create clusters for HDInsight by using Resource Manage
Previously updated : 04/07/2020 Last updated : 06/23/2022 # Create Apache Hadoop clusters in HDInsight by using Resource Manager templates
In this article, you have learned several ways to create an HDInsight cluster. T
* For an in-depth example of deploying an application, see [Provision and deploy microservices predictably in Azure](../app-service/deploy-complex-application-predictably.md). * For guidance on deploying your solution to different environments, see [Development and test environments in Microsoft Azure](../devtest-labs/devtest-lab-overview.md). * To learn about the sections of the Azure Resource Manager template, see [Authoring templates](../azure-resource-manager/templates/syntax.md).
-* For a list of the functions you can use in an Azure Resource Manager template, see [Template functions](../azure-resource-manager/templates/template-functions.md).
+* For a list of the functions you can use in an Azure Resource Manager template, see [Template functions](../azure-resource-manager/templates/template-functions.md).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
Fixed issues represent selected issues that were previously logged via Hortonwor
- **OMS Portal:** We have removed the link from HDInsight resource page that was pointing to OMS portal. Azure Monitor logs initially used its own portal called the OMS portal to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal where it will continue to be developed. HDInsight has deprecated the support for OMS portal. Customers will use HDInsight Azure Monitor logs integration in Azure portal. -- **Spark 2.3**-
- - <https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations>
+- **Spark 2.3:** [Spark Release 2.3.0 deprecations](https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations)
### Upgrading
hdinsight Hdinsight Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-guide.md
Title: Azure HDInsight troubleshooting guides
description: Troubleshoot Azure HDInsight. Step-by-step documentation shows you how to use HDInsight to solve common problems with Apache Hive, Apache Spark, Apache YARN, Apache HBase, HDFS, and Apache Storm. Previously updated : 08/14/2019 Last updated : 06/23/2022 # Troubleshoot Azure HDInsight
hdinsight Apache Spark Intellij Tool Failure Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
Previously updated : 07/12/2019 Last updated : 06/23/2022 # Failure spark job debugging with Azure Toolkit for IntelliJ (preview)
Create a Spark Scala/Java application, then run the application on a Spark cl
### Manage resources * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
-* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
+* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight Apache Spark Job Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-job-debugging.md
description: Use YARN UI, Spark UI, and Spark History server to track and debug
Previously updated : 04/23/2020 Last updated : 06/23/2022 # Debug Apache Spark jobs running on Azure HDInsight
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
The Patient Access API describes adherence to four FHIR implementation guides:
The Provider Directory API describes adherence to one implementation guide:
-* [HL7 Da Vinci PDex Plan Network IG](http://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurerΓÇÖs insurance plans, their associated networks, and the organizations and providers that participate in these networks.
+* [HL7 Da Vinci PDex Plan Network IG](https://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
## Touchstone
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
# Store profiles in Azure API for FHIR
-HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
-[FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
+HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context in which FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications. [FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
Azure API for FHIR allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
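
For example, a resource declares the profiles it claims to conform to in its `meta.profile` element. A minimal sketch (using the US Core Patient profile URL as an illustration) might look like this:

```json
{
  "resourceType": "Patient",
  "id": "example-patient",
  "meta": {
    "profile": [
      "http://hl7.org/fhir/us/core/StructureDefinition/us-core-patient"
    ]
  }
}
```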
-Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. You can go to the specific IG site to learn more about the IG and the profiles defined within it.
+Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. For more information, visit the specific IG site to learn more about the IG and the profiles defined within it:
-|Name |URL
-|- |-
-Us Core |<https://www.hl7.org/fhir/us/core/>
-CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
-Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/>
-Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+- [US Core](https://www.hl7.org/fhir/us/core/)
+- [CARIN Blue Button](https://hl7.org/fhir/us/carin-bb)
+- [Da Vinci Payer Data Exchange](https://hl7.org/fhir/us/davinci-pdex)
+- [Argonaut](https://www.fhir.org/guides/argonaut/pd/)
> [!NOTE] > The Azure API for FHIR does not store any profiles from implementation guides by default. You will need to load them into the Azure API for FHIR.
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
The Patient Access API describes adherence to four FHIR implementation guides:
The Provider Directory API describes adherence to one implementation guide:
-* [HL7 Da Vinci PDex Plan Network IG](http://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurerΓÇÖs insurance plans, their associated networks, and the organizations and providers that participate in these networks.
+* [HL7 Da Vinci PDex Plan Network IG](https://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
## Touchstone
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
We have a basic SMART on FHIR proxy as part of the managed service. If this does
### Can I create a custom FHIR resource?
-We don't allow custom FHIR resources. If you need a custom FHIR resource, you can build a custom resource on top of the [Basic resource](http://www.hl7.org/fhir/basic.html) with extensions.
+We don't allow custom FHIR resources. If you need a custom FHIR resource, you can build a custom resource on top of the [Basic resource](https://www.hl7.org/fhir/basic.html) with extensions.
### Are [extensions](https://www.hl7.org/fhir/extensibility.html) supported on the FHIR service?
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
-Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. You can go to the specific IG site to learn more about the IG and the profiles defined within it.
-
-|Name |URL
-|- |-
-Us Core |<https://www.hl7.org/fhir/us/core/>
-CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
-Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/>
-Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. For more information, visit the specific IG site to learn more about the IG and the profiles defined within it:
+
+- [US Core](https://www.hl7.org/fhir/us/core/)
+- [CARIN Blue Button](https://hl7.org/fhir/us/carin-bb)
+- [Da Vinci Payer Data Exchange](https://hl7.org/fhir/us/davinci-pdex)
+- [Argonaut](https://www.fhir.org/guides/argonaut/pd/)
> [!NOTE] > The FHIR service does not store any profiles from implementation guides by default. You will need to load them into the FHIR service.
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Azure Health Data Services now includes support for DICOM service. DICOM enables
For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR.
-* **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
+* **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](https://www.hl7.org/) and refer to batch/transaction interactions.
* [Chained Search Improvements](./././fhir/overview-of-search.md#chained--reverse-chained-searching): Chained Search & Reverse Chained Search are no longer limited to 100 items per sub query. * The $convert-data operation can now transform JSON objects to FHIR R4. * Events: Trigger new workflows when resources are created, updated, or deleted in a FHIR service.
import-export Storage Import Export Data From Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md
Perform the following steps to order an import job in Azure Import/Export via th
1. Select the **Destination country/region** for the job. 1. Then select **Apply**.
- [ ![Screenshot of Get Started options for a new export order in Azure Import/Export's Preview portal. The Export From Azure transfer type and the Apply button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png) ](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox)
+ [![Screenshot of Get Started options for a new export order in Azure Import/Export's Preview portal. The Export From Azure transfer type and the Apply button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox)
1. Choose the **Select** button for **Import/Export Job**.
Perform the following steps to order an import job in Azure Import/Export via th
You can select **Go to resource** to open the **Overview** of the job.
- [ ![Screenshot showing the Overview pane for an Azure Import Export job in Created state in the Preview portal.](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png) ](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox)
+ [![Screenshot showing the Overview pane for an Azure Import Export job in Created state in the Preview portal.](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox)
# [Portal (Classic)](#tab/azure-portal-classic) Perform the following steps to create an export job in the Azure portal using the classic Azure Import/Export service.
-1. Log on to <https://portal.azure.com/>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Search for **import/export jobs**. ![Screenshot of the Search box at the top of the Azure Portal home page. A search key for the Import Export Jobs Service is entered in the Search box.](../../includes/media/storage-import-export-classic-import-steps/import-to-blob-1.png)
You can use the copy logs from the job to verify that all data transferred succe
To find the log locations, open the job in the [Azure portal](https://portal.azure.com/). The **Data copy details** show the **Copy log path** and **Verbose log path** for each drive that was included in the order.
-[ ![Screenshot showing a completed export job in Azure Import Export. In Data Copy Details, the Copy Log Path and Verbose Log Path are highlighted.](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png) ](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png#lightbox)
+[![Screenshot showing a completed export job in Azure Import Export. In Data Copy Details, the Copy Log Path and Verbose Log Path are highlighted.](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png)](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png#lightbox)
At this time, you can delete the job or leave it. Jobs automatically get deleted after 90 days.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Title: Connect devices with X.509 certificates in an Azure IoT Central applicati
description: How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application Previously updated : 06/30/2021 Last updated : 06/15/2022
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-administer.md
Title: Change Azure IoT Central application settings | Microsoft Docs
description: Learn how to manage your Azure IoT Central application by changing application name, URL, upload image, and delete an application Previously updated : 12/28/2021 Last updated : 06/22/2022
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Title: Authorize REST API in Azure IoT Central
description: How to authenticate and authorize IoT Central REST API calls Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-build-iotc-device-bridge.md
Previously updated : 12/21/2021 Last updated : 06/22/2022 custom: contperf-fy22q3
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md
description: How to configure file uploads from your devices to the cloud. After
Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules-advanced.md
Title: Use workflows to integrate your Azure IoT Central application with other
description: This how-to article shows you, as a builder, how to configure rules and actions that integrate your IoT Central application with other cloud services. To create an advanced rule, you use an IoT Central connector in either Power Automate or Azure Logic Apps. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules.md
Title: Configure rules and actions in Azure IoT Central | Microsoft Docs
description: This how-to article shows you, as a builder, how to configure telemetry-based rules and actions in your Azure IoT Central application. Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
Title: Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central | Microsoft
description: Learn how to connect Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central Previously updated : 11/09/2021 Last updated : 06/16/2022
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
-- Previously updated : 08/18/2021++ Last updated : 06/15/2022 # This article applies to solution builders.
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Title: Use the REST API to manage devices in Azure IoT Central
description: How to use the IoT Central REST API to control devices in an application Previously updated : 12/28/2021 Last updated : 06/20/2022
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Title: Analyze device data in your Azure IoT Central application | Microsoft Doc
description: Analyze device data in your Azure IoT Central application. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
Previously updated : 08/28/2021 Last updated : 06/15/2022
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-analytics.md
Title: Extend Azure IoT Central with custom analytics | Microsoft Docs
description: As a solution developer, configure an IoT Central application to do custom analytics and visualizations. This solution uses Azure Databricks. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md
Title: Extend Azure IoT Central with custom rules and notifications | Microsoft
description: As a solution developer, configure an IoT Central application to send email notifications when a device stops sending telemetry. This solution uses Azure Stream Analytics, Azure Functions, and SendGrid. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
Previously updated : 12/22/2021 Last updated : 06/20/2022
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
Previously updated : 12/27/2021 Last updated : 06/21/2022
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md
Title: Edit a device template in your Azure IoT Central application | Microsoft
description: Iterate over your device templates without impacting your live connected devices Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
description: How to export data from your Azure IoT Central application to Azure
Previously updated : 01/06/2022 Last updated : 06/20/2022
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards.md
Title: Create and manage Azure IoT Central dashboards | Microsoft Docs
description: Learn how to create and manage application and personal dashboards in Azure IoT Central. Previously updated : 12/28/2021 Last updated : 06/20/2022
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
Title: Use the REST API to manage data export in Azure IoT Central
description: How to use the IoT Central REST API to manage data export in an application Previously updated : 10/18/2021 Last updated : 06/15/2022
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
Title: Use the REST API to add device templates in Azure IoT Central
description: How to use the IoT Central REST API to add device templates in an application Previously updated : 12/17/2021 Last updated : 06/17/2022
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
Title: How to use the IoT Central REST API to manage devices
description: How to use the IoT Central REST API to add devices in an application Previously updated : 12/18/2021 Last updated : 06/22/2022
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
Previously updated : 12/27/2021 Last updated : 06/20/2022
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Manage Iot Central With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-with-rest-api.md
Previously updated : 10/25/2021 Last updated : 06/15/2022
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
Title: Use the REST API to manage jobs in Azure IoT Central
description: How to use the IoT Central REST API to create and manage jobs in an application Previously updated : 01/05/2022 Last updated : 06/20/2022
iot-central Howto Manage Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-preferences.md
Title: Manage your personal preferences on IoT Central | Microsoft Docs
description: How to manage your personal application preferences such as changing language, theme, and default organization in your IoT Central application. Previously updated : 01/04/2022 Last updated : 06/22/2022
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
Title: Use the REST API to manage users and roles in Azure IoT Central
description: How to use the IoT Central REST API to manage users and roles in an application Previously updated : 08/30/2021 Last updated : 06/16/2022
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
Title: Manage users and roles in Azure IoT Central application | Microsoft Docs
description: As an administrator, how to manage users and roles in your Azure IoT Central application Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
Title: Transform telemetry on ingress to IoT Central | Microsoft Docs
description: To use complex telemetry from devices, you can use mappings to transform it as it arrives in your IoT Central application. This article describes how to map device telemetry on ingress to IoT Central. Previously updated : 11/22/2021 Last updated : 06/17/2022
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
Title: Monitor device connectivity using the Azure IoT Central Explorer
description: Monitor device messages and observe device twin changes through the IoT Central Explorer CLI. Previously updated : 08/30/2021 Last updated : 06/16/2022 ms.tool: azure-cli
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
Title: Use the REST API to query devices in Azure IoT Central
description: How to use the IoT Central REST API to query devices in an application Previously updated : 10/12/2021 Last updated : 06/14/2022
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
Title: Define a new IoT device type in Azure IoT Central | Microsoft Docs
description: This article shows you how to create a new Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties, and commands for your type. Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Transform Data Internally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data-internally.md
Title: Transform data inside Azure IoT Central | Microsoft Docs
description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data in IoT Central before exporting it. Previously updated : 10/28/2021 Last updated : 06/15/2022
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
Title: Transform data for Azure IoT Central | Microsoft Docs
description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data both on the way into IoT Central and on the way out. The scenarios described use IoT Edge and Azure Functions. Previously updated : 12/27/2021 Last updated : 06/24/2022
To set up the data export to send data to your Device bridge:
### Verify
-The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and NPM installed on your local machine. If you don't want to install these prerequisites, use the[Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
+The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and NPM installed on your local machine. If you don't want to install these prerequisites, use the [Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
To run a sample device that tests the scenario:
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
Title: How to use device commands in an Azure IoT Central solution
description: How to use device commands in an Azure IoT Central solution. This tutorial shows you how to use device commands in a client app connected to your Azure IoT Central application. Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
Title: Use location data in an Azure IoT Central solution
description: Learn how to use location data sent from a device connected to your IoT Central application. Plot location data on a map or create geofencing rules. Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Title: Use properties in an Azure IoT Central solution
description: Learn how to use read-only and writable properties in an Azure IoT Central solution. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
The **Priority** and **Time to live** parameters are optional parameters that yo
For more information about how to create routes, see [Declare routes](module-composition.md#declare-routes).
+Select **Next: Target Devices**.
+
+### Step 4: Target devices
+
+Use the tags property from your devices to target the specific devices that should receive this deployment.
+
+Since multiple deployments may target the same device, you should give each deployment a priority number. If there's ever a conflict, the deployment with the highest priority (larger values indicate higher priority) wins. If two deployments have the same priority number, the one that was created most recently wins.
+
+If multiple deployments target the same device, then only the one with the higher priority is applied. If multiple layered deployments target the same device then they are all applied. However, if any properties are duplicated, like if there are two routes with the same name, then the one from the higher priority layered deployment overwrites the rest.
+
+Any layered deployment targeting a device must have a higher priority than the base deployment in order to be applied.
+
+1. Enter a positive integer for the deployment **Priority**.
+1. Enter a **Target condition** to determine which devices will be targeted with this deployment. The condition is based on device twin tags or device twin reported properties and should match the expression format. For example, `tags.environment='test'` or `properties.reported.devicemodel='4000x'`. (A CLI equivalent is sketched after these steps.)
+ Select **Next: Metrics**.
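The same targeting can also be done from the command line. The following is a minimal sketch (not part of the original article), assuming the `azure-iot` CLI extension is installed and a deployment manifest saved locally as `deployment.json`:

```azurecli
# Create an IoT Edge deployment that targets only devices tagged environment='test',
# with priority 10 (higher values win when deployments conflict).
az iot edge deployment create \
  --deployment-id test-environment-deployment \
  --hub-name {iothub_name} \
  --content ./deployment.json \
  --target-condition "tags.environment='test'" \
  --priority 10
```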
-### Step 4: Metrics
+### Step 5: Metrics
Metrics provide summary counts of the various states that a device may report back as a result of applying configuration content.
Metrics provide summary counts of the various states that a device may report ba
WHERE properties.reported.lastDesiredStatus.code = 200 ```
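Once a deployment exists, its metrics can also be evaluated from the command line. A brief sketch, assuming a deployment named `test-environment-deployment` and a user-defined metric named `appliedModules` (both names are illustrative):

```azurecli
# Evaluate a user-defined metric for an existing IoT Edge deployment
az iot edge deployment show-metric \
  --deployment-id test-environment-deployment \
  --hub-name {iothub_name} \
  --metric-id appliedModules \
  --metric-type user
```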
-Select **Next: Target Devices**.
-
-### Step 5: Target devices
-
-Use the tags property from your devices to target the specific devices that should receive this deployment.
-
-Since multiple deployments may target the same device, you should give each deployment a priority number. If there's ever a conflict, the deployment with the highest priority (larger values indicate higher priority) wins. If two deployments have the same priority number, the one that was created most recently wins.
-
-If multiple deployments target the same device, then only the one with the higher priority is applied. If multiple layered deployments target the same device then they are all applied. However, if any properties are duplicated, like if there are two routes with the same name, then the one from the higher priority layered deployment overwrites the rest.
-
-Any layered deployment targeting a device must have a higher priority than the base deployment in order to be applied.
-
-1. Enter a positive integer for the deployment **Priority**.
-1. Enter a **Target condition** to determine which devices will be targeted with this deployment. The condition is based on device twin tags or device twin reported properties and should match the expression format. For example, `tags.environment='test'` or `properties.reported.devicemodel='4000x'`.
- Select **Next: Review + Create** to move on to the final step. ### Step 6: Review and create
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows supports the following architectures:
For more information about Windows ARM64 supported processors, see [Windows Processor Requirements](/windows-hardware/design/minimum/windows-processor-requirements).
-## Virtual machines
+## Nested virtualization
-Azure IoT Edge for Linux on Windows can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. There are two forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local VM or Azure VM. For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
+Azure IoT Edge for Linux on Windows (EFLOW) can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. EFLOW supports the following nested virtualization scenarios:
+
+| Version | Hyper-V VM | Azure VM | VMware ESXi VM | Other Hypervisor |
+| - | -- | -- | -- | -- |
+| EFLOW 1.1 LTS | ![1.1LTS](./media/support/green-check.png) | ![1.1LTS](./media/support/green-check.png) | ![1.1LTS](./media/support/green-check.png) | - |
+| EFLOW Continuous Release (CR) ([Public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)) | ![CR](./media/support/green-check.png) | ![CR](./media/support/green-check.png) | ![CR](./media/support/green-check.png) | - |
+
+For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
### VMware virtual machine Azure IoT Edge for Linux on Windows supports running inside a Windows virtual machine hosted on the [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) product family. Specific networking and virtualization configurations are needed to support this scenario. For more information about VMware configuration, see [EFLOW Nested virtualization](./nested-virtualization.md).
iot-hub Iot Hub Automatic Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management-cli.md
Automatic device management works by updating a set of device twins or module tw
* The **metrics** define the summary counts of various configuration states such as **Success**, **In Progress**, and **Error**. Custom metrics are specified as queries on twin reported properties. System metrics are the default metrics that measure twin update status, such as the number of twins that are targeted and the number of twins that have been successfully updated.
-Automatic configurations run for the first time shortly after the configuration is created and then at five minute intervals. Metrics queries run each time the automatic configuration runs.
+Automatic configurations run for the first time shortly after the configuration is created and then at five-minute intervals. Metrics queries run each time the automatic configuration runs. A maximum of 100 automatic configurations is supported on standard tier IoT hubs; ten on free tier IoT hubs. Throttling limits also apply. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
## CLI prerequisites
Metric queries for modules are also similar to queries for devices, but you sele
## Create a configuration
-You configure target devices by creating a configuration that consists of the target content and metrics.
+You can create a maximum of 100 automatic configurations on standard tier IoT hubs; ten on free tier IoT hubs. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
-Use the following command to create a configuration:
+You configure target devices by creating a configuration that consists of the target content and metrics. Use the following command to create a configuration:
```azurecli az iot hub configuration create --config-id [configuration id] \
iot-hub Iot Hub Automatic Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management.md
Automatic device management works by updating a set of device twins or module tw
* The **metrics** define the summary counts of various configuration states such as **Success**, **In Progress**, and **Error**. Custom metrics are specified as queries on twin reported properties. System metrics are the default metrics that measure twin update status, such as the number of twins that are targeted and the number of twins that have been successfully updated.
-Automatic configurations run for the first time shortly after the configuration is created and then at five minute intervals. Metrics queries run each time the automatic configuration runs.
+Automatic configurations run for the first time shortly after the configuration is created and then at five-minute intervals. Metrics queries run each time the automatic configuration runs. A maximum of 100 automatic configurations is supported on standard tier IoT hubs; ten on free tier IoT hubs. Throttling limits also apply. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
## Implement twins
Before you create a configuration, you must specify which devices or modules you
## Create a configuration
+You can create a maximum of 100 automatic configurations on standard tier IoT hubs; ten on free tier IoT hubs. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
+ 1. In the [Azure portal](https://portal.azure.com), go to your IoT hub. 2. Select **Configurations** in the left navigation pane.
iot-hub Tutorial X509 Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-scripts.md
Microsoft provides PowerShell and Bash scripts to help you understand how to cre
### Step 1 - Setup
-Get OpenSSL for Windows. See <https://www.openssl.org/docs/faq.html#MISC4> for places to download it or <https://www.openssl.org/source/> to build from source. Then run the preliminary scripts:
+Download [OpenSSL for Windows](https://www.openssl.org/docs/faq.html#MISC4) or [build it from source](https://www.openssl.org/source/). Then run the preliminary scripts:
1. Copy the scripts from this GitHub [repository](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) into the local directory in which you want to work. All files will be created as children of this directory.
lab-services Create And Configure Labs Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/create-and-configure-labs-admin.md
+
+ Title: Configure regions for labs
+description: Learn how to change the region of a lab.
++++ Last updated : 06/17/2022+++
+# Configure regions for labs
+
+This article shows you how to configure the locations where you can create labs by enabling or disabling regions associated with the lab plan. Enabling a region allows lab creators to create labs within that region. You cannot create labs in disabled regions.
+
+When you create a lab plan, you have to set an initial region for the labs, but you can enable or disable more regions for your lab at any time. If you create a lab plan by using the Azure portal, the enabled regions initially include the same region as the location of the lab plan. If you create a lab plan by using the API or SDKs, you must set the AllowedRegion property when you create the lab plan.
+
+You might need to change the region for your labs in these circumstances:
+- Regulatory compliance. Choosing specific regions to control where your data resides and to help ensure compliance with local regulations.
+- Service availability. Providing the optimal lab experience for your students by ensuring that Azure Lab Services is available in the region closest to them. For more information about service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services).
+- New region. You may acquire quota in a region different from the regions already enabled.
+
+## Prerequisites
+
+- To perform these steps, you must have an existing lab plan.
+
+## Enable regions
+
+To enable one or more regions after lab creation, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your lab plan.
+1. On the Lab plan overview page, select **Lab settings** from the left menu or select **Adjust settings**. Both go to the Lab settings page.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-plan-overview-page.png" alt-text="Screenshot that shows the Lab overview page with Lab settings and Adjust settings highlighted.":::
+1. On the Lab settings page, under **Location selection**, select **Select regions**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-settings-page.png" alt-text="Screenshot that shows the Lab settings page with Select regions highlighted.":::
+1. On the Enabled regions page, select the region(s) you want to add, and then select **Enable**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/enabled-regions-page.png" alt-text="Screenshot that shows the Enabled regions page with Enable and Apply highlighted.":::
+1. Enabled regions are displayed at the top of the list. Check that your desired regions are displayed and then select **Apply** to confirm your selection.
+ > [!NOTE]
+ > There are two steps to saving your enabled regions:
+ > - At the top of the regions list select **Enable**
+ > - At the bottom of the page, select **Apply**
+1. On the Lab settings page, verify that the regions you enabled are listed and then select **Save**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/newly-enabled-regions.png" alt-text="Screenshot that shows the Lab settings page with the newly selected regions highlighted.":::
+ > [!NOTE]
+ > You must select **Save** to save your lab plan configuration.
+
+## Disable regions
+
+To disable one or more regions after lab creation, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your lab plan.
+1. On the Lab plan overview page, select **Lab settings** from the left menu or select **Adjust settings**. Both go to the Lab settings page.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-plan-overview-page.png" alt-text="Screenshot that shows the Lab overview page with Lab settings and Adjust settings highlighted.":::
+1. On the Lab settings page, under **Location selection**, select **Select regions**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-settings-page.png" alt-text="Screenshot that shows the Lab settings page with Select regions highlighted.":::
+1. On the Enabled regions page, clear the check box of the region(s) you want to disable, and then select **Disable**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/disable-regions-page.png" alt-text="Screenshot that shows the Enabled regions page with Disable and Apply highlighted.":::
+1. Enabled regions are displayed at the top of the list. Check that your desired regions are displayed and then select **Apply** to confirm your selection.
+ > [!NOTE]
+ > There are two steps to saving your disabled regions:
+ > - At the top of the regions list select **Disable**
+ > - At the bottom of the page, select **Apply**
+1. On the Lab settings page, verify that only the regions you want enabled are listed, and then select **Save**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/newly-enabled-regions.png" alt-text="Screenshot that shows the Lab settings page with the newly selected regions highlighted.":::
+ > [!NOTE]
+ > You must select **Save** to save your lab plan configuration.
+
+## Next steps
+
+- Learn how to choose the right regions for your Lab plan at [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/#overview).
+- Check [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services) for Azure Lab Services availability near you.
lab-services How To Configure Student Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-student-usage.md
When students use the registration link to sign in to a classroom, they're promp
Here's a link for students to [sign up for a Microsoft account](http://signup.live.com). > [!IMPORTANT]
-> When students sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, <http://signup.live.com>, in the lab registration email that you send to students who are using non-Microsoft accounts.
+> When students sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, `http://signup.live.com`, in the lab registration email that you send to students who are using non-Microsoft accounts.
### Use a GitHub account
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
For an internal load balancer frontend, add a **zones** parameter to the interna
## Non-Zonal
-Load Balancers can also be created in a non-zonal configuration by use of a "no-zone" frontend (Public IP or Public IP Prefix). This option does not give a guarantee of redundancy. Note that all Public IP addresses that are [upgraded](../virtual-network/ip-services/public-ip-upgrade-portal.md) will be of type "no-zone".
+Load Balancers can also be created in a non-zonal configuration by using a "no-zone" frontend (a public IP or public IP prefix in the case of a public load balancer; a private IP in the case of an internal load balancer). This option doesn't guarantee redundancy. Note that all public IP addresses that are [upgraded](../virtual-network/ip-services/public-ip-upgrade-portal.md) will be of type "no-zone".
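Whether a public frontend is zonal, zone-redundant, or no-zone is determined by how its public IP is created. A minimal sketch, assuming a resource group named `myResourceGroup` (all names are illustrative):

```azurecli
# Zone-redundant Standard public IP that spans zones 1, 2, and 3
az network public-ip create \
  --resource-group myResourceGroup \
  --name myZoneRedundantIP \
  --sku Standard \
  --zone 1 2 3

# Omitting --zone creates a "no-zone" (non-zonal) public IP
az network public-ip create \
  --resource-group myResourceGroup \
  --name myNoZoneIP \
  --sku Standard
```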
## <a name="design"></a> Design considerations
Using multiple frontends allow you to load balance traffic on more than one port
### Transition between regional zonal models
-In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing Public IPs (e.g., used for Load Balancer frontends) would remain non-zonal. In order to ensure your architecture can take advantage of the new zones, it is recommended that new frontend IPs be created, and the appropriate rules and configurations be replicated to utilize these new public IPs.
+In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs (e.g., used for load balancer frontends) would remain non-zonal. In order to ensure your architecture can take advantage of the new zones, it is recommended that new frontend IPs be created, and the appropriate rules and configurations be replicated to utilize these new IPs.
### Control vs data plane implications
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
These settings can be applied to the best model as a result of your automated ML
|**Enable/disable ONNX model compatibility**|✓|| |**Test the model** | ✓| ✓ (preview)|
-### Run control settings
+### Job control settings
-These settings allow you to review and control your experiment runs and its child runs.
+These settings allow you to review and control your experiment jobs and their child jobs.
| |The Python SDK|The studio web experience| |-|:-:|:-:|
-|**Run summary table**| ✓|✓|
-|**Cancel runs & child runs**| ✓|✓|
+|**Job summary table**| ✓|✓|
+|**Cancel jobs & child jobs**| ✓|✓|
|**Get guardrails**| ✓|✓| ## When to use AutoML: classification, regression, forecasting, computer vision & NLP
With this capability you can:
* Download or deploy the resulting model as a web service in Azure Machine Learning. * Operationalize at scale, leveraging Azure Machine Learning [MLOps](concept-model-management-and-deployment.md) and [ML Pipelines](concept-ml-pipelines.md) capabilities.
-Authoring AutoML models for vision tasks is supported via the Azure ML Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+Authoring AutoML models for vision tasks is supported via the Azure ML Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
Learn how to [set up AutoML training for computer vision models](how-to-auto-train-image-models.md).
Instance segmentation | Tasks to identify objects in an image at the pixel level
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
The NLP capability supports:
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Configure the compute target for model training**, such as your [local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks](how-to-set-up-training-targets.md). 1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model.
-1. **Submit the training run.**
+1. **Submit the training job.** (A CLI sketch follows this list.)
1. **Review the results**
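For the submission step, a minimal command-line sketch, assuming the Azure ML CLI v2 (`ml` extension) and an AutoML job specification already saved as `automl-job.yml` (both are assumptions, not part of the original article):

```azurecli
# Submit the AutoML training job defined in automl-job.yml
az ml job create \
  --file automl-job.yml \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace
```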
The following diagram illustrates this process.
![Automated Machine learning](./media/concept-automated-ml/automl-concept-diagram2.png)
-You can also inspect the logged run information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the run. The training run produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
+You can also inspect the logged job information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the job. The training job produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md#explain) to the generated models.
The web interface for automated ML always uses a remote [compute target](concept
### Choose compute target Consider these factors when choosing your compute target:
- * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available.
- * **Choose a remote ML compute cluster**: If you are training with larger datasets like in production training creating models which need longer trains, remote compute will provide much better end-to-end time performance because `AutoML` will parallelize trains across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure will add around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running.
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short training times (i.e., seconds or a couple of minutes per child job), training on your local computer might be a better choice. There is no setup time; the infrastructure resources (your PC or VM) are directly available.
+ * **Choose a remote ML compute cluster**: If you are training with larger datasets, as in production training that creates models needing longer training times, remote compute provides much better end-to-end time performance because `AutoML` parallelizes training across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child job, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running.
### Pros and cons
Consider these pros and cons when choosing to use local vs. remote.
| | Pros (Advantages) |Cons (Handicaps) | |||||
-|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize runs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
-|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child runs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child run |
+|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize jobs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
+|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child jobs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child job |
### Feature availability
More features are available when you use the remote compute, as shown in the tab
| Out-of-the-box GPU support (training and inference) | ✓ | | | Image Classification and Labeling support | ✓ | | | Auto-ARIMA, Prophet and ForecastTCN models for forecasting | ✓ | |
-| Multiple runs/iterations in parallel | ✓ | |
+| Multiple jobs/iterations in parallel | ✓ | |
| Create models with interpretability in AutoML studio web experience UI | ✓ | | | Feature engineering customization in studio web experience UI| ✓ | | | Azure ML hyperparameter tuning | ✓ | | | Azure ML Pipeline workflow support | ✓ | |
-| Continue a run | ✓ | |
+| Continue a job | ✓ | |
| Forecasting | ✓ | ✓ | | Create and run experiments in notebooks | ✓ | ✓ | | Register and visualize experiment's info and metrics in UI | ✓ | ✓ |
To help confirm that such bias isn't applied to the final recommended model, aut
Learn how to [configure AutoML experiments to use test data (preview) with the SDK](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
-You can also [test any existing automated ML model (preview)](./v1/how-to-configure-auto-train-v1.md#test-existing-automated-ml-model)), including models from child runs, by providing your own test data or by setting aside a portion of your training data.
+You can also [test any existing automated ML model (preview)](./v1/how-to-configure-auto-train-v1.md#test-existing-automated-ml-model), including models from child jobs, by providing your own test data or by setting aside a portion of your training data.
## Feature engineering
Enable this setting with:
## <a name="ensemble"></a> Ensemble models
-Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your run. Automated machine learning uses both voting and stacking ensemble methods for combining models:
+Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your job. Automated machine learning uses both voting and stacking ensemble methods for combining models:
* **Voting**: predicts based on the weighted average of predicted class probabilities (for classification tasks) or predicted regression targets (for regression tasks). * **Stacking**: stacking combines heterogeneous models and trains a meta-model based on the output from the individual models. The current default meta-models are LogisticRegression for classification tasks and ElasticNet for regression/forecasting tasks.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
The compute resources you use for your compute targets are attached to a [worksp
## <a name="train"></a> Training compute targets
-Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a run. You can also attach your own compute resource, although support for different scenarios might vary.
+Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a job. You can also attach your own compute resource, although support for different scenarios might vary.
[!INCLUDE [aml-compute-target-train](../../includes/aml-compute-target-train.md)]
-Learn more about how to [submit a training run to a compute target](how-to-set-up-training-targets.md).
+Learn more about how to [submit a training job to a compute target](how-to-set-up-training-targets.md).
## <a name="deploy"></a> Compute targets for inference
When created, these compute resources are automatically part of your workspace,
|Capability |Compute cluster |Compute instance | |||| |Single- or multi-node cluster | **&check;** | Single node cluster |
-|Autoscales each time you submit a run | **&check;** | |
+|Autoscales each time you submit a job | **&check;** | |
|Automatic cluster management and job scheduling | **&check;** | **&check;** | |Support for both CPU and GPU resources | **&check;** | **&check;** |
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
Azure Machine Learning is built on top of multiple Azure services. While the dat
In addition to customer-managed keys, Azure Machine Learning also provides a [hbi_workspace flag](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-). Enabling this flag reduces the amount of data Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors: * Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, provided you haven't created any previous clusters in that subscription. Otherwise, you need to raise a support ticket to enable encryption of the scratch disk of your compute clusters.
-* Cleans up your local scratch disk between runs.
+* Cleans up your local scratch disk between jobs.
* Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault. > [!TIP]
The following resources store metadata for your workspace:
| Service | How it's used | | -- | -- |
-| Azure Cosmos DB | Stores run history data. |
+| Azure Cosmos DB | Stores job history data. |
| Azure Cognitive Search | Stores indices that are used to help query your machine learning content. | | Azure Storage Account | Stores other metadata such as Azure Machine Learning pipelines data. |
Azure Machine Learning uses compute resources to train and deploy machine learni
| Azure Machine Learning compute cluster | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. | **Compute cluster**
-The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no runs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
+The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
-Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your run) and encryption support is limited to system-managed keys only.
+Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your job) and encryption support is limited to system-managed keys only.
**Compute instance** The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft managed keys. Customer managed key encryption isn't supported for OS and temp disk.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
This process allows you to encrypt both the Data and the OS Disk of the deployed
### Machine Learning Compute **Compute cluster**
-The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no runs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
+The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
-Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only for the duration of your run,) and encryption support is limited to system-managed keys only.
+Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only for the duration of your job), and encryption support is limited to system-managed keys only.
**Compute instance** The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft managed keys. Customer managed key encryption is not supported for OS and temp disk.
To secure external calls made to the scoring endpoint, Azure Machine Learning us
Microsoft may collect non-user identifying information like resource names (for example the dataset name, or the machine learning experiment name), or job environment variables for diagnostic purposes. All such data is stored using Microsoft-managed keys in storage hosted in Microsoft owned subscriptions and follows [Microsoft's standard Privacy policy and data handling standards](https://privacy.microsoft.com/privacystatement). This data is kept within the same region as your workspace.
-Microsoft also recommends not storing sensitive information (such as account key secrets) in environment variables. Environment variables are logged, encrypted, and stored by us. Similarly when naming [run_id](/python/api/azureml-core/azureml.core.run%28class%29), avoid including sensitive information such as user names or secret project names. This information may appear in telemetry logs accessible to Microsoft Support engineers.
+Microsoft also recommends not storing sensitive information (such as account key secrets) in environment variables. Environment variables are logged, encrypted, and stored by us. Similarly, when naming your jobs, avoid including sensitive information such as user names or secret project names. This information may appear in telemetry logs accessible to Microsoft Support engineers.
You may opt out from diagnostic data being collected by setting the `hbi_workspace` parameter to `TRUE` while provisioning the workspace. This functionality is supported when using the AzureML Python SDK, the Azure CLI, REST APIs, or Azure Resource Manager templates.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Use a visual canvas to build an end-to-end machine learning workflow. Train, tes
+ Drag-and-drop [datasets](#datasets) and [components](#component) onto the canvas. + Connect the components to create a [pipeline draft](#pipeline-draft).
-+ Submit a [pipeline run](#pipeline-run) using the compute resources in your Azure Machine Learning workspace.
++ Submit a [pipeline job](#pipeline-job) using the compute resources in your Azure Machine Learning workspace. + Convert your **training pipelines** to **inference pipelines**. + [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit a new pipeline that runs with different parameters and datasets. + Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and datasets.
A valid pipeline has these characteristics:
* All input ports for components must have some connection to the data flow. * All required parameters for each component must be set.
-When you're ready to run your pipeline draft, you submit a pipeline run.
+When you're ready to run your pipeline draft, you submit a pipeline job.
-### Pipeline run
+### Pipeline job
-Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline run**. You can go back to any pipeline run to inspect it for troubleshooting or auditing. **Clone** a pipeline run to create a new pipeline draft for you to edit.
+Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline job**. You can go back to any pipeline job to inspect it for troubleshooting or auditing. **Clone** a pipeline job to create a new pipeline draft for you to edit.
-Pipeline runs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize run history. You can set the experiment for every pipeline run.
+Pipeline jobs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize job history. You can set the experiment for every pipeline job.
## Datasets
To learn how to deploy your model, see [Tutorial: Deploy a machine learning mode
## Publish
-You can also publish a pipeline to a **pipeline endpoint**. Similar to an online endpoint, a pipeline endpoint lets you submit new pipeline runs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
+You can also publish a pipeline to a **pipeline endpoint**. Similar to an online endpoint, a pipeline endpoint lets you submit new pipeline jobs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
Published pipelines are flexible, they can be used to train or retrain models, [perform batch inferencing](how-to-run-batch-predictions-designer.md), process new data, and much more. You can publish multiple pipelines to a single pipeline endpoint and specify which pipeline version to run.
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
Here's the authentication process for Azure Machine Learning using multi-factor
1. The client signs in to Azure AD and gets an Azure Resource Manager token. 1. The client presents the token to Azure Resource Manager and to all Azure Machine Learning.
-1. Azure Machine Learning provides a Machine Learning service token to the user compute target (for example, Azure Machine Learning compute cluster). This token is used by the user compute target to call back into the Machine Learning service after the run is complete. The scope is limited to the workspace.
+1. Azure Machine Learning provides a Machine Learning service token to the user compute target (for example, Azure Machine Learning compute cluster). This token is used by the user compute target to call back into the Machine Learning service after the job is complete. The scope is limited to the workspace.
[![Authentication in Azure Machine Learning](media/concept-enterprise-security/authentication.png)](media/concept-enterprise-security/authentication.png#lightbox)
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
Last updated 09/23/2021
# What are Azure Machine Learning environments?
-Azure Machine Learning environments are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, environment variables, and software settings around your training and scoring scripts. They also specify run times (Python, Spark, or Docker). The environments are managed and versioned entities within your Machine Learning workspace that enable reproducible, auditable, and portable machine learning workflows across a variety of compute targets.
+Azure Machine Learning environments are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker). The environments are managed and versioned entities within your Machine Learning workspace that enable reproducible, auditable, and portable machine learning workflows across a variety of compute targets.
You can use an `Environment` object on your local compute to: * Develop your training script.
You can use an `Environment` object on your local compute to:
* Deploy your model with that same environment. * Revisit the environment in which an existing model was trained.
-The following diagram illustrates how you can use a single `Environment` object in both your run configuration (for training) and your inference and deployment configuration (for web service deployments).
+The following diagram illustrates how you can use a single `Environment` object in both your job configuration (for training) and your inference and deployment configuration (for web service deployments).
![Diagram of an environment in machine learning workflow](./media/concept-environments/ml-environment.png)
-The environment, compute target and training script together form the run configuration: the full specification of a training run.
+The environment, compute target and training script together form the job configuration: the full specification of a training job.
## Types of environments
For code samples, see the "Manage environments" section of [How to use environme
## Environment building, caching, and reuse
-Azure Machine Learning builds environment definitions into Docker images and conda environments. It also caches the environments so they can be reused in subsequent training runs and service endpoint deployments. Running a training script remotely requires the creation of a Docker image, but a local run can use a conda environment directly.
+Azure Machine Learning builds environment definitions into Docker images and conda environments. It also caches the environments so they can be reused in subsequent training jobs and service endpoint deployments. Running a training script remotely requires the creation of a Docker image, but a local job can use a conda environment directly.
-### Submitting a run using an environment
+### Submitting a job using an environment
-When you first submit a remote run using an environment, the Azure Machine Learning service invokes an [ACR Build Task](../container-registry/container-registry-tasks-overview.md) on the Azure Container Registry (ACR) associated with the Workspace. The built Docker image is then cached on the Workspace ACR. Curated environments are backed by Docker images that are cached in Global ACR. At the start of the run execution, the image is retrieved by the compute target from the relevant ACR.
+When you first submit a remote job using an environment, the Azure Machine Learning service invokes an [ACR Build Task](../container-registry/container-registry-tasks-overview.md) on the Azure Container Registry (ACR) associated with the Workspace. The built Docker image is then cached on the Workspace ACR. Curated environments are backed by Docker images that are cached in Global ACR. At the start of the job execution, the image is retrieved by the compute target from the relevant ACR.
-For local runs, a Docker or conda environment is created based on the environment definition. The scripts are then executed on the target compute - a local runtime environment or local Docker engine.
+For local jobs, a Docker or conda environment is created based on the environment definition. The scripts are then executed on the target compute - a local runtime environment or local Docker engine.
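Environments can also be registered ahead of time so the image build happens once and the cached image is reused. A brief sketch, assuming the Azure ML CLI v2 (`ml` extension) and an environment definition saved as `environment.yml` (illustrative file name):

```azurecli
# Register a versioned environment; the Docker image is built in the workspace ACR and cached for reuse
az ml environment create \
  --file environment.yml \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace
```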
### Building environments as Docker images
The second step is omitted if you specify [user-managed dependencies](/python/ap
### Image caching and reuse
-If you use the same environment definition for another run, Azure Machine Learning reuses the cached image from the Workspace ACR to save time.
+If you use the same environment definition for another job, Azure Machine Learning reuses the cached image from the Workspace ACR to save time.
To view the details of a cached image, check the Environments page in Azure Machine Learning studio or use the [`Environment.get_image_details`](/python/api/azureml-core/azureml.core.environment.environment#get-image-details-workspace-) method.
The hash isn't affected by the environment name or version. If you rename your e
> [!NOTE] > You will not be able to submit any local changes to a curated environment without changing the name of the environment. The prefixes "AzureML-" and "Microsoft" are reserved exclusively for curated environments, and your job submission will fail if the name starts with either of them.
-The environment's computed hash value is compared with those in the Workspace and global ACR, or on the compute target (local runs only). If there is a match then the cached image is pulled and used, otherwise an image build is triggered.
+The environment's computed hash value is compared with those in the Workspace and global ACR, or on the compute target (local jobs only). If there is a match, the cached image is pulled and used; otherwise, an image build is triggered.
The following diagram shows three environment definitions. Two of them have different names and versions but identical base images and Python packages, which results in the same hash and corresponding cached image. The third environment has different Python packages and versions, leading to a different hash and cached image.
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
Automated ML also implements explicit **model complexity limitations** to preven
Imbalanced data is commonly found in data for machine learning classification scenarios, and refers to data that contains a disproportionate ratio of observations in each class. This imbalance can lead to a falsely perceived positive effect of a model's accuracy, because the input data has bias towards one class, which results in the trained model to mimic that bias.
-In addition, automated ML runs generate the following charts automatically, which can help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
+In addition, automated ML jobs generate the following charts automatically, which can help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
Chart| Description |
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Azure Machine Learning only uses MLflow Tracking for metric logging and artifact
> [!NOTE] > Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the SDK v2 (preview), and it is recommended to use MLflow for logging and tracking.
-[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine or an Azure Machine Learning compute instance.
+[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), form the component of MLflow that logs and tracks your training job metrics and model artifacts, no matter your experiment's environment: locally on your computer, on a remote compute target, a virtual machine, or an Azure Machine Learning compute instance.
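To point the MLflow client at a workspace, you first need the workspace tracking URI. A minimal sketch, assuming the Azure ML CLI v2; the `mlflow_tracking_uri` property name is an assumption based on the v2 workspace resource and should be verified against your CLI version:

```azurecli
# Retrieve the MLflow tracking URI for an Azure Machine Learning workspace
# (mlflow_tracking_uri is assumed to be the property name in the CLI v2 output)
az ml workspace show \
  --name myWorkspace \
  --resource-group myResourceGroup \
  --query mlflow_tracking_uri \
  --output tsv
```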
## Track experiments
You can [Deploy MLflow models to an online endpoint](how-to-deploy-mlflow-models
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
+You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking, or migrate your jobs to the cloud, for example via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md) cluster.
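For example, a minimal sketch of submitting an MLflow project with the Azure ML backend might look like the following. The compute name and project parameter are placeholders, and it assumes the `azureml-mlflow` plugin is installed and the tracking URI already points at your workspace.

```python
# Minimal sketch: run an MLflow project against the Azure ML backend (preview).
import mlflow

backend_config = {"COMPUTE": "cpu-cluster", "USE_CONDA": False}  # placeholder compute name

mlflow.projects.run(
    uri=".",                        # folder containing the MLproject file
    backend="azureml",
    backend_config=backend_config,
    parameters={"alpha": 0.3},      # placeholder project parameter
)
```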
Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Machine Learning gives you the capability to track the end-to-end audit trail of
- Machine Learning [integrates with Git](how-to-set-up-training-targets.md#gitintegration) to track information on which repository, branch, and commit your code came from. - [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data. - [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input.-- Machine Learning Run history stores a snapshot of the code, data, and computes used to train a model.
+- Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.
- The Machine Learning Model Registry captures all the metadata associated with your model. For example, metadata includes which experiment trained it, where it's being deployed, and if its deployments are healthy.-- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (run) events.
+- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (job) events.
> [!TIP] > While some information on models and datasets is automatically captured, you can add more information by using _tags_. When you look for registered models and datasets in your workspace, you can use tags as a filter.
A theme of the preceding steps is that your retraining should be automated, not
## Automate the machine learning lifecycle
-You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training run. The results of the run can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
+You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training job. The results of the job can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
The [Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. It provides the following enhancements to Azure Pipelines:
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
Use the following tips to help you manage and optimize your compute resource cos
- Configure your training clusters for autoscaling - Set quotas on your subscription and workspaces-- Set termination policies on your training run
+- Set termination policies on your training job
- Use low-priority virtual machines (VM) - Schedule compute instances to shut down and start up automatically - Use an Azure Reserved VM Instance
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ml.md
Azure Machine Learning's [Responsible AI scorecard](./how-to-responsible-ai-sc
The ML platform also enables decision-making by informing model-driven and data-driven business decisions: -- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, ΓÇ£how would a medicine impact a patientΓÇÖs blood pressure?". Such insights are provided through the[Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the [Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
- Model-driven insights, to answer end-users' questions such as "what can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md). ## Next steps
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Azure Machine Learning provides several ways to train your models, from code-fir
| Training method | Description | | -- | -- |
- | [Run configuration](#run-configuration) | A **typical way to train models** is to use a training script and run configuration. The run configuration provides the information needed to configure the training environment used to train your model. You can specify your training script, compute target, and Azure ML environment in your run configuration and run a training job. |
- | [Automated machine learning](#automated-machine-learning) | Automated machine learning allows you to **train models without extensive data science or programming knowledge**. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and hyperparameter tuning. You don't have to worry about defining a run configuration when using automated machine learning. |
+ | [Run configuration](#run-configuration) | A **typical way to train models** is to use a training script and job configuration. The job configuration provides the information needed to configure the training environment used to train your model. You can specify your training script, compute target, and Azure ML environment in your job configuration and run a training job. |
+ | [Automated machine learning](#automated-machine-learning) | Automated machine learning allows you to **train models without extensive data science or programming knowledge**. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and hyperparameter tuning. You don't have to worry about defining a job configuration when using automated machine learning. |
| [Machine learning pipeline](#machine-learning-pipeline) | Pipelines are not a different training method, but a **way of defining a workflow using modular, reusable steps**, that can include training as part of the workflow. Machine learning pipelines support using automated machine learning and run configuration to train models. Since pipelines are not focused specifically on training, the reasons for using a pipeline are more varied than the other training methods. Generally, you might use a pipeline when:<br>* You want to **schedule unattended processes** such as long running training jobs or data preparation.<br>* Use **multiple steps** that are coordinated across heterogeneous compute resources and storage locations.<br>* Use the pipeline as a **reusable template** for specific scenarios, such as retraining or batch scoring.<br>* **Track and version data sources, inputs, and outputs** for your workflow.<br>* Your workflow is **implemented by different teams that work on specific steps independently**. Steps can then be joined together in a pipeline to implement the workflow. | + **Designer**: Azure Machine Learning designer provides an easy entry-point into machine learning for building proof of concepts, or for users with little coding experience. It allows you to train models using a drag and drop web-based UI. You can use Python code as part of the design, or train models without writing any code.
-+ **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training run on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines.
++ **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training job on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines. Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Title: Git integration for Azure Machine Learning
-description: Learn how Azure Machine Learning integrates with a local Git repository to track repository, branch, and current commit information as part of a training run.
+description: Learn how Azure Machine Learning integrates with a local Git repository to track repository, branch, and current commit information as part of a training job.
SSH displays this fingerprint when it connects to an unknown host to protect you
## Track code that comes from Git repositories
-When you submit a training run from the Python SDK or Machine Learning CLI, the files needed to train the model are uploaded to your workspace. If the `git` command is available on your development environment, the upload process uses it to check if the files are stored in a git repository. If so, then information from your git repository is also uploaded as part of the training run. This information is stored in the following properties for the training run:
+When you submit a training job from the Python SDK or Machine Learning CLI, the files needed to train the model are uploaded to your workspace. If the `git` command is available on your development environment, the upload process uses it to check if the files are stored in a git repository. If so, then information from your git repository is also uploaded as part of the training job. This information is stored in the following properties for the training job:
| Property | Git command used to get the value | Description | | -- | -- | -- | | `azureml.git.repository_uri` | `git ls-remote --get-url` | The URI that your repository was cloned from. | | `mlflow.source.git.repoURL` | `git ls-remote --get-url` | The URI that your repository was cloned from. |
-| `azureml.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the run was submitted. |
-| `mlflow.source.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the run was submitted. |
-| `azureml.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the run. |
-| `mlflow.source.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the run. |
+| `azureml.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the job was submitted. |
+| `mlflow.source.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the job was submitted. |
+| `azureml.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the job. |
+| `mlflow.source.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the job. |
| `azureml.git.dirty` | `git status --porcelain .` | `True`, if the branch/commit is dirty; otherwise, `false`. |
-This information is sent for runs that use an estimator, machine learning pipeline, or script run.
+This information is sent for jobs that use an estimator, machine learning pipeline, or script run.
If your training files are not located in a git repository on your development environment, or the `git` command is not available, then no git-related information is tracked.
If your training files are not located in a git repository on your development e
## View the logged information
-The git information is stored in the properties for a training run. You can view this information using the Azure portal or Python SDK.
+The git information is stored in the properties for a training job. You can view this information using the Azure portal or Python SDK.
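For the Python SDK route, a minimal sketch (SDK v1 assumed; the experiment name and run ID are placeholders) is to read the job's properties directly. The Azure portal steps follow below.

```python
# Minimal sketch: read the git properties recorded for a completed job (SDK v1).
from azureml.core import Workspace, Experiment, Run

ws = Workspace.from_config()
experiment = Experiment(ws, "my-experiment")   # placeholder experiment name
run = Run(experiment, run_id="<run-id>")       # placeholder run ID

properties = run.get_properties()
for key in ("azureml.git.repository_uri", "azureml.git.branch",
            "azureml.git.commit", "azureml.git.dirty"):
    print(key, properties.get(key))
```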
### Azure portal 1. From the [studio portal](https://ml.azure.com), select your workspace.
-1. Select __Experiments__, and then select one of your experiments.
-1. Select one of the runs from the __RUN NUMBER__ column.
+1. Select __Jobs__, and then select one of your experiments.
+1. Select one of the jobs from the __Display name__ column.
1. Select __Outputs + logs__, and then expand the __logs__ and __azureml__ entries. Select the link that begins with __###\_azure__. The logged information contains text similar to the following JSON:
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
Next to the regular release cadence, hot fixes are applied in the case vulnerabi
> [!NOTE] > The host OS is not the OS version you might specify for an [environment](how-to-use-environments.md) when training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
-## Microsoft-managed container images
+## Microsoft-managed container images
[Base docker images](https://github.com/Azure/AzureML-Containers) maintained by Azure Machine Learning get security patches frequently to address newly discovered vulnerabilities.
machine-learning Dsvm Tools Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 05/12/2021 Last updated : 06/23/2022 # Development tools on the Azure Data Science Virtual Machine
The Data Science Virtual Machine (DSVM) bundles several popular tools in a highl
| Typical uses | Code editor and Git integration | | How to use and run it | Desktop shortcut (`C:\Program Files (x86)\Microsoft VS Code\Code.exe`) in Windows, desktop shortcut or terminal (`code`) in Linux |
-## RStudio Desktop
-
-| Category | Value |
-|--|--|
-| What is it? | Client IDE for R language |
-| Supported DSVM versions | Windows, Linux |
-| Typical uses | R development |
-| How to use and run it | Desktop shortcut (`C:\Program Files\RStudio\bin\rstudio.exe`) on Windows, desktop shortcut (`/usr/bin/rstudio`) on Linux |
- ## PyCharm | Category | Value |
machine-learning Dsvm Tools Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-languages.md
--++ Previously updated : 05/12/2021 Last updated : 06/23/2022
artificial intelligence (AI) applications. Here are some of the notable ones.
Open a command prompt and type `R`.
-* Use in an IDE:
-
- To edit R scripts in an IDE, you can use RStudio, which is installed on the DSVM images by default.
- * Use in Jupyter Lab Open a Launcher tab in Jupyter Lab and select the type and kernel of your new document. If you want your document to be
artificial intelligence (AI) applications. Here are some of the notable ones.
* Install R packages:
- You can install new R packages either by using the `install.packages()` function or by using RStudio.
+ You can install new R packages by using the `install.packages()` function.
## Julia
machine-learning Linux Dsvm Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md
description: Learn how to complete several common data science tasks by using th
--++ Previously updated : 05/10/2021 Last updated : 06/23/2022
To get copies of the code samples that are used in this walkthrough, use git to
git clone https://github.com/Azure/Azure-MachineLearning-DataScience.git ```
-Open a terminal window and start a new R session in the R interactive console. You also can use RStudio, which is preinstalled on the DSVM.
-
+Open a terminal window and start a new R session in the R interactive console.
To import the data and set up the environment: ```R
select top 10 spam, char_freq_dollar from spam;
GO ```
-You can also query by using SQuirreL SQL. Follow steps similar to PostgreSQL by using the SQL Server JDBC driver. The JDBC driver is in the /usr/share/java/jdbcdrivers/sqljdbc42.jar folder.
+You can also query by using SQuirreL SQL. Follow steps similar to PostgreSQL by using the SQL Server JDBC driver. The JDBC driver is in the /usr/share/java/jdbcdrivers/sqljdbc42.jar folder.
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 04/02/2020 Last updated : 06/23/2022
The key differences between these two product offerings are detailed below:
|Built-in<br>Hosted Notebooks | No<br>(requires additional configuration) | Yes | |Built-in SSO | No <br>(requires additional configuration) | Yes | |Built-in Collaboration | No | Yes |
-|Pre-installed Tools | Jupyter(lab), RStudio Server, VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab)<br> RStudio Server |
+|Pre-installed Tools | Jupyter(lab), VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab) |
## Sample use cases
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
Title: 'Reference: Ubuntu Data Science Virtual Machine' description: Details on tools included in the Ubuntu Data Science Virtual Machine-+ - Previously updated : 05/12/2021+ Last updated : 06/23/2022
easier to troubleshoot issues, compared to developing on a Spark cluster.
## IDEs and editors
-You have a choice of several code editors, including VS.Code, PyCharm, RStudio, IntelliJ, vi/Vim, Emacs.
+You have a choice of several code editors, including VS.Code, PyCharm, IntelliJ, vi/Vim, Emacs.
-VS.Code, PyCharm, RStudio, and IntelliJ are graphical editors. To use them, you need to be signed in to a graphical
+VS.Code, PyCharm, and IntelliJ are graphical editors. To use them, you need to be signed in to a graphical
desktop. You open them by using desktop and application menu shortcuts. Vim and Emacs are text-based editors. On Emacs, the ESS add-on package makes working with R easier within the Emacs
You can exit Rattle and R. Now you can modify the generated R script. Or, use th
## Next steps
-Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
+Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 05/12/2021 Last updated : 06/23/2022
The Data Science Virtual Machine comes with the most useful data-science tools p
| [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | | | [Visual Studio 2019 Community Edition](https://www.visualstudio.com/community/) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [Visual Studio on the DSVM](dsvm-tools-development.md#visual-studio-community-edition) | | [Visual Studio Code](https://code.visualstudio.com/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Visual Studio Code on the DSVM](./dsvm-tools-development.md#visual-studio-code) |
-| [RStudio Desktop](https://www.rstudio.com/products/rstudio/#Desktop) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [RStudio Desktop on the DSVM](./dsvm-tools-development.md#rstudio-desktop) |
-| [RStudio Server](https://www.rstudio.com/products/rstudio/#Server) <br/> (disabled by default) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
| [PyCharm Community Edition](https://www.jetbrains.com/pycharm/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [PyCharm on the DSVM](./dsvm-tools-development.md#pycharm) | | [IntelliJ IDEA](https://www.jetbrains.com/idea/) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | | | [Vim](https://www.vim.org) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
--++ Previously updated : 05/08/2020 Last updated : 06/23/2022
Last updated 05/08/2020
The Windows Data Science Virtual Machine (DSVM) is a powerful data science development environment where you can perform data exploration and modeling tasks. The environment comes already built and bundled with several popular data analytics tools that make it easy to get started with your analysis for on-premises, cloud, or hybrid deployments.
-The DSVM works closely with Azure services. It can read and process data that's already stored on Azure, in Azure Synapse (formerly SQL DW),Azure Data Lake, Azure Storage, or Azure Cosmos DB. It can also take advantage of other analytics tools, such as Azure Machine Learning.
+The DSVM works closely with Azure services. It can read and process data that's already stored on Azure, in Azure Synapse (formerly SQL DW), Azure Data Lake, Azure Storage, or Azure Cosmos DB. It can also take advantage of other analytics tools, such as Azure Machine Learning.
In this article, you'll learn how to use your DSVM to perform data science tasks and interact with other Azure services. Here are some of the things you can do on the DSVM:
When you're in the notebook, you can explore your data, build the model, and tes
You can use languages like R and Python to do your data analytics right on the DSVM.
-For R, you can use an IDE like RStudio that can be found on the start menu or on the desktop. Or you can use R Tools for Visual Studio. Microsoft has provided additional libraries on top of the open-source CRAN R to enable scalable analytics and the ability to analyze data larger than the memory size allowed in parallel chunked analysis.
+For R, you can use R Tools for Visual Studio. Microsoft has provided additional libraries on top of the open-source CRAN R to enable scalable analytics and the ability to analyze data larger than the memory size allowed in parallel chunked analysis.
For Python, you can use an IDE like Visual Studio Community Edition, which has the Python Tools for Visual Studio (PTVS) extension pre-installed. By default, only Python 3.6, the root Conda environment, is configured on PTVS. To enable Anaconda Python 2.7, take the following steps:
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-cicd-data-ingestion.md
steps:
artifact: di-notebooks ```
-The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen:
-
-![linting unit tests](media/how-to-cicd-data-ingestion/linting-unit-tests.png)
+The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen.
If the linting and unit testing is successful, the pipeline will copy the source code to the artifact repository to be used by the subsequent deployment steps.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
For each compute instance in a workspace that you created (or that was created f
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign-in and all actions will use that userΓÇÖs identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, or restart a compute instance. All users in the workspace contributor and owner roles can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can access a terminal through Jupyter, JupyterLab, or RStudio. A compute instance has single-user sign-in, and all actions use that user's identity for Azure RBAC and attribution of experiment jobs. SSH access is controlled through a public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
To create a compute instance, you'll need permissions for the following actions:
* [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md) * [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
-* [Submit a training run](how-to-set-up-training-targets.md)
+* [Submit a training job](how-to-set-up-training-targets.md)
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
When using a customer-managed key, Azure Machine Learning creates a secondary re
An additional configuration you can provide for your data is to set the **confidential_data** parameter to **true**. Doing so, does the following: * Starts encrypting the local scratch disk for Azure Machine Learning compute clusters, providing you have not created any previous clusters in your subscription. If you have previously created a cluster in the subscription, open a support ticket to have encryption of the scratch disk enabled for your compute clusters.
-* Cleans up the local scratch disk between runs.
+* Cleans up the local scratch disk between jobs.
* Securely passes credentials for the storage account, container registry, and SSH account from the execution layer to your compute clusters by using key vault. * Enables IP filtering to ensure the underlying batch pools cannot be called by any external services other than AzureMachineLearningService.
To avoid this problem, we recommend one of the following approaches:
After these changes, you can specify the ID of the existing Key Vault resource when running the template. The template will then reuse the Key Vault by setting the `keyVault` property of the workspace to its ID.
- To get the ID of the Key Vault, you can reference the output of the original template run or use the Azure CLI. The following command is an example of using the Azure CLI to get the Key Vault resource ID:
+ To get the ID of the Key Vault, you can reference the output of the original template job or use the Azure CLI. The following command is an example of using the Azure CLI to get the Key Vault resource ID:
```azurecli az keyvault show --name mykeyvault --resource-group myresourcegroup --query id
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-ingest-adf.md
#Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models. + # Data ingestion with Azure Data Factory In this article, you learn about the available options for building a data ingestion pipeline with [Azure Data Factory](../data-factory/introduction.md). This Azure Data Factory pipeline is used to ingest data for use with [Azure Machine Learning](overview-what-is-azure-machine-learning.md). Data Factory allows you to easily extract, transform, and load (ETL) data. Once the data has been transformed and loaded into storage, it can be used to train your machine learning models in Azure Machine Learning.
The function is invoked with the [Azure Data Factory Azure Function activity](..
## Azure Data Factory with Custom Component activity
-In this option, the data is processed with custom Python code wrapped into an executable. It is invoked with an [ Azure Data Factory Custom Component activity](../data-factory/transform-data-using-dotnet-custom-activity.md). This approach is a better fit for large data than the previous technique.
+In this option, the data is processed with custom Python code wrapped into an executable. It is invoked with an [Azure Data Factory Custom Component activity](../data-factory/transform-data-using-dotnet-custom-activity.md). This approach is a better fit for large data than the previous technique.
![Diagram shows an Azure Data Factory pipeline, with a custom component and Run M L Pipeline, and an Azure Machine Learning pipeline, with Train Model, and how they interact with raw data and prepared data.](media/how-to-data-ingest-adf/adf-customcomponent.png)
This method is recommended for [Machine Learning Operations (MLOps) workflows](c
Each time the Data Factory pipeline runs, 1. The data is saved to a different location in storage.
-1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](concept-ml-pipelines.md). When calling the ML pipeline, the data location and run ID are sent as parameters.
+1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](concept-ml-pipelines.md). When calling the ML pipeline, the data location and job ID are sent as parameters.
1. The ML pipeline can then create an Azure Machine Learning datastore and dataset with the data location. Learn more in [Execute Azure Machine Learning pipelines in Data Factory](../data-factory/transform-data-machine-learning-service.md). ![Diagram shows an Azure Data Factory pipeline and an Azure Machine Learning pipeline and how they interact with raw data and prepared data. The Data Factory pipeline feeds data to the Prepared Data database, which feeds a data store, which feeds datasets in the Machine Learning workspace.](media/how-to-data-ingest-adf/aml-dataset.png)
Each time the Data Factory pipeline runs,
Once the data is accessible through a datastore or dataset, you can use it to train an ML model. The training process might be part of the same ML pipeline that is called from ADF. Or it might be a separate process such as experimentation in a Jupyter notebook.
-Since datasets support versioning, and each run from the pipeline creates a new version, it's easy to understand which version of the data was used to train a model.
+Since datasets support versioning, and each job from the pipeline creates a new version, it's easy to understand which version of the data was used to train a model.
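As a rough sketch of that pattern (Azure ML SDK v1; the datastore name and data path are placeholders), each job can register the location it received as a new version of the same dataset:

```python
# Minimal sketch: register a new dataset version for the location a job received (SDK v1).
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()
datastore = Datastore.get(ws, "prepared_data")            # placeholder datastore name

dataset = Dataset.File.from_files(path=(datastore, "prepared/2022-06-24/"))  # placeholder path
dataset = dataset.register(
    workspace=ws,
    name="prepared-data",
    create_new_version=True,       # each job registers a new version of the same dataset
)
print(dataset.name, dataset.version)
```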
### Read data directly from storage
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Once your environment is set up, use the VS Code debugger to test and debug your
- To debug startup behavior, place your breakpoint(s) inside the `init` function. - To debug scoring behavior, place your breakpoint(s) inside the `run` function.
-1. Select the VS Code Run view.
+1. Select the VS Code Run view.
1. In the Run and Debug dropdown, select **Azure ML: Debug Local Endpoint** to start debugging your endpoint locally. In the **Breakpoints** section of the Run view, check that:
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-parallel-run-step.md
Last updated 10/21/2021
#Customer intent: As a data scientist, I want to figure out why my ParallelRunStep doesn't run so that I can fix it. + # Troubleshooting the ParallelRunStep [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-and-where.md
adobe-target: true + # Deploy machine learning models to Azure [!INCLUDE [sdk & cli v1](../../includes/machine-learning-dev-v1.md)]
Set `-p` to the path of a folder or a file that you want to register.
For more information on `az ml model register`, see the [reference documentation](/cli/azure/ml(v1)/model).
-### Register a model from an Azure ML training run
+### Register a model from an Azure ML training job
If you need to register a model that was created previously through an Azure Machine Learning training job, you can specify the experiment, run, and path to the model:
To include multiple files in the model registration, set `model_path` to the pat
For more information, see the documentation for the [Model class](/python/api/azureml-core/azureml.core.model.model).
-### Register a model from an Azure ML training run
+### Register a model from an Azure ML training job
When you use the SDK to train a model, you can receive either a [Run](/python/api/azureml-core/azureml.core.run.run) object or an [AutoMLRun](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun) object, depending on how you trained the model. Each object can be used to register a model created by an experiment run.
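For example, a minimal sketch of registering from a completed `Run` object (SDK v1; the experiment name, run ID, and model path are placeholders) looks like this:

```python
# Minimal sketch: register the model produced by a completed job from its Run object (SDK v1).
from azureml.core import Workspace, Experiment, Run

ws = Workspace.from_config()
run = Run(Experiment(ws, "my-experiment"), run_id="<run-id>")  # placeholders

model = run.register_model(
    model_name="my-model",              # placeholder registry name
    model_path="outputs/model.pkl",     # placeholder path inside the job's outputs
)
print(model.name, model.version)
```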
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
Next, create a Docker image from the converted model and all dependencies. This
#### Deploy to a local edge server
-All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md
-) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
### Consume the deployed model
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
+
+ Title: Deploy MLflow models
+
+description: Learn to deploy your MLflow model to the deployment targets supported by Azure.
+++++ Last updated : 06/06/2022++
+ms.devlang: azurecli
++
+# Deploy MLflow models
+++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](./v1/how-to-deploy-mlflow-models.md)
+> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
+
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure ML for both real-time and batch inference. Azure ML supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment. Those models can be deployed to ACI (Azure Container Instances), AKS (Azure Kubernetes Service), or our managed inference services (referred to as MIR).
+
+For no-code-deployment, Azure Machine Learning
+
+* Dynamically installs Python packages provided in the `conda.yaml` file, which means the dependencies are installed at container runtime.
+ * The base container image/curated environment used for dynamic installation is `mcr.microsoft.com/azureml/mlflow-ubuntu18.04-py37-cpu-inference` or `AzureML-mlflow-ubuntu18.04-py37-cpu-inference`
+* Provides an MLflow base image/curated environment that contains the following items:
+ * [`azureml-inference-server-http`](how-to-inference-server-http.md)
+ * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
+ * `pandas`
+ * The scoring script baked into the image.
+
+## Supported targets for MLflow models
+
+The following table shows the target support for MLflow models in Azure ML:
++
+| Scenario | Azure Container Instance | Azure Kubernetes | Managed Inference |
+| :- | :-: | :-: | :-: |
+| Deploying models logged with MLflow to real time inference | **&check;**<sup>1</sup> | **&check;**<sup>1</sup> | **&check;**<sup>1</sup> |
+| Deploying models logged with MLflow to batch inference | <sup>2</sup> | <sup>2</sup> | **&check;** |
+| Deploying models with ColSpec signatures | **&check;**<sup>4</sup> | **&check;**<sup>4</sup> | **&check;**<sup>4</sup> |
+| Deploying models with TensorSpec signatures | **&check;**<sup>5</sup> | **&check;**<sup>5</sup> | **&check;**<sup>5</sup> |
+| Run models logged with MLflow in your local compute with Azure ML CLI v2 | **&check;** | **&check;** | <sup>3</sup> |
+| Debug online endpoints locally in Visual Studio Code (preview) | | | |
+
+> [!NOTE]
+> - <sup>1</sup> Spark flavor is not supported at the moment for deployment.
+> - <sup>2</sup> We suggest you use Azure Machine Learning Pipelines with ParallelRunStep.
+> - <sup>3</sup> For deploying MLflow models locally, use the MLflow CLI command `mlflow models serve -m <MODEL_NAME>`. Configure the environment variable `MLFLOW_TRACKING_URI` with the URL of your tracking server.
+> - <sup>4</sup> Data type `mlflow.types.DataType.Binary` is not supported as a column type. For models that work with images, we suggest you use either (a) tensor inputs using the [TensorSpec input type](https://mlflow.org/docs/latest/python_api/mlflow.types.html#mlflow.types.TensorSpec), or (b) `Base64` encoding with a `mlflow.types.DataType.String` column type, which is commonly used to encode binary data that needs to be stored and transferred over media.
+> - <sup>5</sup> Tensors with unspecified shapes (`-1`) are currently supported only in the batch dimension. For instance, a signature with shape `(-1, -1, -1, 3)` is not supported, but `(-1, 300, 300, 3)` is.
+
+For more information about how to specify requests to online-endpoints or the supported file types in batch-endpoints, check [Considerations when deploying to real time inference](#considerations-when-deploying-to-real-time-inference) and [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+
+## Deployment tools
+
+There are three workflows for deploying MLflow models to Azure ML:
+
+- [Deploy using the MLflow plugin](#deploy-using-the-mlflow-plugin)
+- [Deploy using CLI (v2)](#deploy-using-cli-v2)
+- [Deploy using Azure Machine Learning studio](#deploy-using-azure-machine-learning-studio)
+
+### Which option to use?
+
+If you are familiar with MLflow or your platform supports MLflow natively (like Azure Databricks) and you wish to continue using the same set of methods, use the `azureml-mlflow` plugin. On the other hand, if you are more familiar with the [Azure ML CLI v2](concept-v2.md), want to automate deployments using automation pipelines, or want to keep deployment configuration in a git repository, we recommend using the [Azure ML CLI v2](concept-v2.md). If you want to quickly deploy and test models trained with MLflow, you can use the [Azure Machine Learning studio](https://ml.azure.com) UI deployment.
+
+## Deploy using the MLflow plugin
+
+The MLflow plugin [azureml-mlflow](https://pypi.org/project/azureml-mlflow/) can deploy models to Azure ML for real-time serving, to Azure Kubernetes Service (AKS), Azure Container Instances (ACI), or Managed Inference Service (MIR).
+
+> [!WARNING]
+> Deploying to Managed Inference Service - Batch endpoints is not supported in the MLflow plugin at the moment.
+
+### Prerequisites
+
+* Install the `azureml-mlflow` package.
+* If you are running outside an Azure ML compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [MLflow Tracking URI to connect with Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow) for more details.
+
+### Deploying models to ACI or AKS
+
+Deployments can be generated using either the Python API for MLflow or the MLflow CLI. In both cases, you can provide a JSON configuration file with the details of the deployment you want to achieve. If none is provided, a default deployment is done using Azure Container Instances (ACI) and a minimal configuration. The full specification of this configuration file for ACI and AKS can be found at [Deployment configuration schema](v1/reference-azure-machine-learning-cli.md#deployment-configuration-schema).
+
+#### Configuration example for ACI deployment
+
+```json
+{
+ "computeType": "aci",
+ "containerResourceRequirements":
+ {
+ "cpu": 1,
+ "memoryInGB": 1
+ },
+ "location": "eastus2",
+}
+```
+
+> [!NOTE]
+> - If `containerResourceRequirements` is not indicated, a deployment with minimal compute configuration is applied (cpu: 0.1 and memory: 0.5).
+> - If `location` is not indicated, it defaults to the location of the workspace.
+
+#### Configuration example for an AKS deployment
+
+```json
+{
+ "computeType": "aks",
+ "computeTargetName": "aks-mlflow"
+}
+```
+
+> [!NOTE]
+> - In the above example, `aks-mlflow` is the name of an Azure Kubernetes cluster registered/created in Azure Machine Learning.
+
+#### Running the deployment
+
+The following sample creates a deployment using an ACI:
+
+ ```python
+ import json
+ from mlflow.deployments import get_deploy_client
+
+ # Create the deployment configuration.
+ # If no deployment configuration is provided, then the deployment happens on ACI.
+ deploy_config = {"computeType": "aci"}
+
+ # Write the deployment configuration into a file.
+ deployment_config_path = "deployment_config.json"
+ with open(deployment_config_path, "w") as outfile:
+ outfile.write(json.dumps(deploy_config))
+
+ # Set the tracking uri in the deployment client.
+ client = get_deploy_client("<azureml-mlflow-tracking-url>")
+
+ # MLflow requires the deployment configuration to be passed as a dictionary.
+ config = {"deploy-config-file": deployment_config_path}
+ model_name = "mymodel"
+ model_version = 1
+
+ # define the model path and the name is the service name
+ # if model is not registered, it gets registered automatically and a name is autogenerated using the "name" parameter below
+ client.create_deployment(
+ model_uri=f"models:/{model_name}/{model_version}",
+ config=config,
+ name="mymodel-aci-deployment",
+ )
+ ```
+
+### Deploying models to Managed Inference
+
+Deployments can be generated using either the Python API for MLflow or the MLflow CLI. In both cases, a JSON configuration file needs to be provided with the details of the deployment you want to achieve. The full specification of this configuration can be found at [Managed online deployment schema (v2)](reference-yaml-deployment-managed-online.md).
+
+#### Configuration example for a Managed Inference Service deployment (real time)
+
+```json
+{
+ "instance_type": "Standard_DS2_v2",
+ "instance_count": 1,
+}
+```
+
+#### Running the deployment
+
+The following sample deploys a model to a real time Managed Inference Endpoint:
+
+ ```python
+ import json
+ from mlflow.deployments import get_deploy_client
+
+ # Create the deployment configuration.
+ deploy_config = {
+ "instance_type": "Standard_DS2_v2",
+ "instance_count": 1,
+ }
+
+ # Write the deployment configuration into a file.
+ deployment_config_path = "deployment_config.json"
+ with open(deployment_config_path, "w") as outfile:
+ outfile.write(json.dumps(deploy_config))
+
+ # Set the tracking uri in the deployment client.
+ client = get_deploy_client("<azureml-mlflow-tracking-url>")
+
+ # MLflow requires the deployment configuration to be passed as a dictionary.
+ config = {"deploy-config-file": deployment_config_path}
+ model_name = "mymodel"
+ model_version = 1
+
+ # define the model path and the name is the service name
+ # if model is not registered, it gets registered automatically and a name is autogenerated using the "name" parameter below
+ client.create_deployment(
+ model_uri=f"models:/{model_name}/{model_version}",
+ config=config,
+ name="mymodel-mir-deployment",
+ )
+ ```
+
+## Deploy using CLI (v2)
+
+You can use Azure ML CLI v2 to deploy models trained and logged with MLflow to Managed Inference. When you deploy your MLflow model using the Azure ML CLI v2, it's a no-code-deployment so you don't have to provide a scoring script or an environment, but you can if needed.
+
+### Prerequisites
++
+* You must have an MLflow model. The examples in this article are based on the models from [https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/).
+
+ * If you don't have an MLflow formatted model, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md).
++
+In the code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set it, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
++
+### Steps
++
+This example shows how you can deploy an MLflow model to an online endpoint using CLI (v2).
+
+> [!IMPORTANT]
+> For MLflow no-code-deployment, **[testing via local endpoints](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
+
+1. Create a YAML configuration file for your endpoint. The following example configures the name and authentication mode of the endpoint:
+
+ __create-endpoint.yaml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+
+1. To create a new endpoint using the YAML configuration, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
+
+1. Create a YAML configuration file for the deployment. The following example configures a deployment of the `sklearn-diabetes` model to the endpoint created in the previous step:
+
+ > [!IMPORTANT]
+ > For MLflow no-code-deployment (NCD) to work, setting **`type`** to **`mlflow_model`** is required (`type: mlflow_model`). For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
+
+ __sklearn-deployment.yaml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
+
+1. To create the deployment using the YAML configuration, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
+
+## Deploy using Azure Machine Learning studio
+
+This example shows how you can deploy an MLflow model to an online endpoint using [Azure Machine Learning studio](https://ml.azure.com).
+
+1. From [studio](https://ml.azure.com), select your workspace and then use the __models__ page to create a new model in the registry. You can use the option *From local files* to select the MLflow model from [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/mlflow/sklearn-diabetes/model](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/mlflow/sklearn-diabetes/model):
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/register-model-ui.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/register-model-ui.png" alt-text="Screenshot showing create option on the Models UI page.":::
+
+2. From [studio](https://ml.azure.com), select your workspace and then use either the __endpoints__ or __models__ page to create the endpoint deployment:
+
+ # [Endpoints page](#tab/endpoint)
+
+ 1. From the __Endpoints__ page, Select **+Create**.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
+
+ 1. Provide a name and authentication type for the endpoint, and then select __Next__.
+ 1. When selecting a model, select the MLflow model registered previously. Select __Next__ to continue.
+
+ 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need a scoring script or an environment.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models.":::
+
+ 1. Complete the wizard to deploy the model to the endpoint.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen.":::
+
+ # [Models page](#tab/models)
+
+ 1. Select the MLflow model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint__.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" alt-text="Screenshot showing how to deploy model from Models UI.":::
+
+ 1. Complete the wizard to deploy the model to the endpoint.
+
+
+
+### Deploy models after a training job
+
+This section helps you understand how to deploy models to an online endpoint once you have completed your [training job](how-to-train-cli.md).
+
+1. Download the outputs from the training job. The outputs contain the model folder.
+
+ > [!NOTE]
+ > If you have used `mlflow.autolog()` in your training script, you will see model artifacts in the job's run history. Azure Machine Learning integrates with MLflow's tracking functionality. You can use `mlflow.autolog()` for several common ML frameworks to log model parameters, performance metrics, model artifacts, and even feature importance graphs.
+ >
+ > For more information, see [Train models with CLI](how-to-train-cli.md#model-tracking-with-mlflow). Also see the [training job samples](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step) in the GitHub repository.
+
+ # [Azure Machine Learning studio](#tab/studio)
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/download-output-logs.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/download-output-logs.png" alt-text="Screenshot showing how to download Outputs and logs from Experimentation run.":::
+
+ # [CLI](#tab/cli)
+
+ ```azurecli
+ az ml job download -n $run_id --outputs
+ ```
+
+2. To deploy using the downloaded files, you can use either studio or the Azure command-line interface. Use the model folder from the outputs for deployment:
+
+ * [Deploy using Azure Machine Learning studio](how-to-deploy-mlflow-models-online-endpoints.md#deploy-using-azure-machine-learning-studio).
+ * [Deploy using Azure Machine Learning CLI (v2)](how-to-deploy-mlflow-models-online-endpoints.md#deploy-using-cli-v2).
+
+## Considerations when deploying to real time inference
+
+The following input types are supported in Azure ML when deploying models with no-code deployment. See the *Notes* at the bottom of the table for additional considerations.
+
+| Input type | Support in MLflow models (serve) | Support in Azure ML|
+| :- | :-: | :-: |
+| JSON-serialized pandas DataFrames in the split orientation | **&check;** | **&check;** |
+| JSON-serialized pandas DataFrames in the records orientation | **&check;** | <sup>1</sup> |
+| CSV-serialized pandas DataFrames | **&check;** | <sup>2</sup> |
+| Tensor input format as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | | **&check;** |
+| Tensor input formatted as in TF Serving's API | **&check;** | |
+
+> [!NOTE]
+> - <sup>1</sup> We suggest you use the split orientation instead. The records orientation doesn't guarantee that column ordering is preserved.
+> - <sup>2</sup> We suggest you explore batch inference for processing files.
+
+Regardless of the input type used, Azure Machine Learning requires inputs to be provided in a JSON payload, within a dictionary key `input_data`. Note that this key is not required when serving models with the command `mlflow models serve`, so the payloads can't be used interchangeably.
+
+### Creating requests
+
+Your inputs should be submitted inside a JSON payload containing a dictionary with key `input_data`.
+
+#### Payload example for a JSON-serialized pandas DataFrame in the split orientation
+
+```json
+{
+ "input_data": {
+ "columns": [
+ "age", "sex", "trestbps", "chol", "fbs", "restecg", "thalach", "exang", "oldpeak", "slope", "ca", "thal"
+ ],
+ "index": [1],
+ "data": [
+ [1, 1, 145, 233, 1, 2, 150, 0, 2.3, 3, 0, 2]
+ ]
+ }
+}
+```
+
+#### Payload example for a tensor input
+
+```json
+{
+ "input_data": [
+ [1, 1, 0, 233, 1, 2, 150, 0, 2.3, 3, 0, 2],
+ [1, 1, 0, 233, 1, 2, 150, 0, 2.3, 3, 0, 2],
+ [1, 1, 0, 233, 1, 2, 150, 0, 2.3, 3, 0, 2],
+ [1, 1, 145, 233, 1, 2, 150, 0, 2.3, 3, 0, 2]
+ ]
+}
+```
+
+#### Payload example for a named-tensor input
+
+```json
+{
+ "input_data": {
+ "tokens": [
+ [0, 655, 85, 5, 23, 84, 23, 52, 856, 5, 23, 1]
+ ],
+ "mask": [
+ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
+ ]
+ }
+}
+```
+
+## Considerations when deploying to batch inference
+
+Azure ML supports no-code deployment for batch inference in the Managed Inference service. This is a convenient way to deploy models that need to process large amounts of data in a batch fashion.
+
+### How work is distributed across workers
+
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
+
+> [!WARNING]
+> Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
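+
+As a minimal sketch of flattening a nested folder structure beforehand (the `data/nested` and `data/flat` paths are illustrative assumptions), you might do something like the following:
+
+```python
+import shutil
+from pathlib import Path
+
+# Hypothetical paths: the nested source folder and the flat target folder.
+source = Path("data/nested")
+target = Path("data/flat")
+target.mkdir(parents=True, exist_ok=True)
+
+# Copy every file into a single flat folder, prefixing with an index to avoid name collisions.
+for index, file_path in enumerate(sorted(p for p in source.rglob("*") if p.is_file())):
+    shutil.copy(file_path, target / f"{index:06d}_{file_path.name}")
+```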
+
+### Supported file types
+
+The following data types are supported for batch inference.
+
+| File extension | Type returned as model's input | Signature requirement |
+| :- | :- | :- |
+| `.csv` | `pd.DataFrame` | `ColSpec`. If not provided, column typing isn't enforced. |
+| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match tensors shape if available. If no signature is available, tensors of type `np.uint8` are inferred. |
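+
+If you want column typing enforced for `.csv` inputs, you can attach a `ColSpec`-based signature when logging the model. The following is a minimal sketch under stated assumptions: the column names are illustrative, and the commented `log_model` call assumes a trained `model` object exists:
+
+```python
+from mlflow.models.signature import ModelSignature
+from mlflow.types.schema import ColSpec, Schema
+
+# Hypothetical input columns; adjust to match your model's expected inputs.
+input_schema = Schema([
+    ColSpec("double", "age"),
+    ColSpec("double", "chol"),
+])
+output_schema = Schema([ColSpec("long")])
+
+signature = ModelSignature(inputs=input_schema, outputs=output_schema)
+
+# Pass the signature when logging the model, for example:
+# mlflow.sklearn.log_model(model, artifact_path="model", signature=signature)
+```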
++
+## Next steps
+
+To learn more, review these articles:
+
+- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
+- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
+- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with an online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
Last updated 03/11/2021
+ # Deploy a model for use with Cognitive Search [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
When deploying a model for use with Azure Cognitive Search, the deployment must
## Connect to your workspace
-An Azure Machine Learning workspace provides a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training runs, including logs, metrics, output, and a snapshot of your scripts.
+An Azure Machine Learning workspace provides a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training jobs, including logs, metrics, output, and a snapshot of your scripts.
To connect to an existing workspace, use the following code:
machine-learning How To Designer Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-transform-data.md
Now that your pipeline is set up to split the data, you need to specify where to
![Screenshot showing how to configure the Export Data components](media/how-to-designer-transform-data/us-income-export-data.png).
-### Submit the run
+### Submit the job
-Now that your pipeline is setup to split and export the data, submit a pipeline run.
+Now that your pipeline is set up to split and export the data, submit a pipeline job.
1. At the top of the canvas, select **Submit**.
-1. In the **Set up pipeline run** dialog, select **Create new** to create an experiment.
+1. In the **Set up pipeline job** dialog, select **Create new** to create an experiment.
- Experiments logically group together related pipeline runs. If you run this pipeline in the future, you should use the same experiment for logging and tracking purposes.
+ Experiments logically group together related pipeline jobs. If you run this pipeline in the future, you should use the same experiment for logging and tracking purposes.
1. Provide a descriptive experiment name like "split-census-data".
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
Last updated 10/21/2021
++ # Export or delete your Machine Learning service workspace data In Azure Machine Learning, you can export or delete your workspace data using either the portal's graphical interface or the Python SDK. This article describes both options.
In Azure Machine Learning, you can export or delete your workspace data using ei
In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete using Azure Machine Learning studio, CLI, and SDK. Telemetry data can be accessed through the Azure Privacy portal.
-In Azure Machine Learning, personal data consists of user information in run history documents.
+In Azure Machine Learning, personal data consists of user information in job history documents.
## Delete high-level resources using the portal
These resources can be deleted by selecting them from the list and choosing **De
:::image type="content" source="media/how-to-export-delete-data/delete-resource-group-resources.png" alt-text="Screenshot of portal, with delete icon highlighted":::
-Run history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
+Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
:::image type="content" source="media/how-to-export-delete-data/storage-account-folders.png" alt-text="Screenshot of azureml directory in storage account, within the portal":::
Run history documents, which may contain personal user information, are stored i
Azure Machine Learning studio provides a unified view of your machine learning resources, such as notebooks, datasets, models, and experiments. Azure Machine Learning studio emphasizes preserving a record of your data and experiments. Computational resources such as pipelines and compute resources can be deleted using the browser. For these resources, navigate to the resource in question and choose **Delete**.
-Datasets can be unregistered and Experiments can be archived, but these operations don't delete the data. To entirely remove the data, datasets and experiment data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Run can be deleted directly in studio. Deleting a Run deletes the Run's data.
+Datasets can be unregistered and Experiments can be archived, but these operations don't delete the data. To entirely remove the data, datasets and experiment data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Job can be deleted directly in studio. Deleting a Job deletes the Job's data.
> [!NOTE] > Prior to unregistering a Dataset, use its **Data source** link to find the specific Data URL to delete.
-You can download training artifacts from experimental runs using the Studio. Choose the **Experiment** and **Run** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download**.
+You can download training artifacts from experimental jobs using the Studio. Choose the **Experiment** and **Job** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download**.
You can download a registered model by navigating to the **Model** and choosing **Download**.
You can download a registered model by navigating to the **Model** and choosing
## Export and delete resources using the Python SDK
-You can download the outputs of a particular run using:
+You can download the outputs of a particular job using:
```python # Retrieved from Azure Machine Learning web UI
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
The file has four sections:
||| |**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. | |**Connect** | 1. Connect to the machine learning workspace. <br /> 2. Connect to a compute target. |
-|**Run** | 1. Submit a training run. |
+|**Job** | 1. Submit a training job. |
|**Deploy** | 1. Register model in Azure Machine Learning registry. 1. Deploy the model. | ## Create repository
Use the [Azure Machine Learning Compute action](https://github.com/Azure/aml-com
with: azure_credentials: ${{ secrets.AZURE_CREDENTIALS }} ```
-## Submit training run
+## Submit training job
Use the [Azure Machine Learning Training action](https://github.com/Azure/aml-run) to submit a ScriptRun, an Estimator or a Pipeline to Azure Machine Learning.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
By keeping your data storage isolated from the default storage the workspace use
### Manage machine learning artifacts as code
-Runs in Azure Machine Learning are defined by a run specification. This specification includes dependencies on input artifacts that are managed on a workspace-instance level, including environments, datasets, and compute. For multi-region run submission and deployments, we recommend the following practices:
+Jobs in Azure Machine Learning are defined by a job specification. This specification includes dependencies on input artifacts that are managed on a workspace-instance level, including environments, datasets, and compute. For multi-region job submission and deployments, we recommend the following practices:
* Manage your code base locally, backed by a Git repository. * Export important notebooks from Azure Machine Learning studio.
Runs in Azure Machine Learning are defined by a run specification. This specific
* Manage configurations as code. * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](v1/reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml(v1)-folder-attach).
- * Use run submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)).
+ * Use job submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)).
* Use [Environments.save_to_directory()](/python/api/azureml-core/azureml.core.environment(class)#save-to-directory-path--overwrite-false-) to save your environment definitions. * Use a Dockerfile if you use custom Docker images. * Use the [Dataset](/python/api/azureml-core/azureml.core.dataset(class)) class to define the collection of data [paths](/python/api/azureml-core/azureml.data.datapath) used by your solution.
Runs in Azure Machine Learning are defined by a run specification. This specific
### Continue work in the failover workspace
-When your primary workspace becomes unavailable, you can switch over the secondary workspace to continue experimentation and development. Azure Machine Learning does not automatically submit runs to the secondary workspace if there is an outage. Update your code configuration to point to the new workspace resource. We recommend to avoiding hardcoding workspace references. Instead, use a [workspace config file](how-to-configure-environment.md#workspace) to minimize manual user steps when changing workspaces. Make sure to also update any automation, such as continuous integration and deployment pipelines to the new workspace.
+When your primary workspace becomes unavailable, you can switch over to the secondary workspace to continue experimentation and development. Azure Machine Learning does not automatically submit jobs to the secondary workspace if there is an outage. Update your code configuration to point to the new workspace resource. We recommend avoiding hardcoded workspace references. Instead, use a [workspace config file](how-to-configure-environment.md#workspace) to minimize manual user steps when changing workspaces. Make sure to also update any automation, such as continuous integration and deployment pipelines, to the new workspace.
-Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Dependent on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue run submission. In case you have configured your primary workspace and secondary workspace resources to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace. For example, if both workspaces share the same docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
+Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Depending on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue job submission. If you have configured your primary and secondary workspaces to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace; for example, both workspaces might share the same Docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
![Reference resource configuration](./media/how-to-high-availability-machine-learning/bcdr-resource-configuration.png)
The following artifacts can be exported and imported between workspaces by using
> [!TIP] > * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) to register a dataset.
-> * __Run outputs__ are stored in the default storage account associated with a workspace. While run outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
+> * __Job outputs__ are stored in the default storage account associated with a workspace. While job outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
## Recovery options
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-pipelines-application-insights.md
Last updated 10/21/2021
+ # Collect machine learning pipeline log files in Application Insights for alerts and debugging [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Logs can help you diagnose errors and warnings, or track performance metrics lik
> [!TIP]
-> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
## Prerequisites
The following table describes how to log specific value types:
|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`|| |Log matlotlib plot or image file|` mlflow.log_figure(fig, "figure.png")`||
-## Log a training run with MLflow
+## Log a training job with MLflow
To set up for logging with MLflow, import `mlflow` and set the tracking URI:
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri()) ```
-### Interactive runs
+### Interactive jobs
When training interactively, such as in a Jupyter Notebook, use the following pattern: 1. Create or set the active experiment.
-1. Start the run.
+1. Start the job.
1. Use logging methods to log metrics and other information.
-1. End the run.
+1. End the job.
-For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a run
+For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a job:
```python from mlflow.tracking import MlflowClient
For remote training runs, the tracking URI and experiment are set automatically.
To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
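+
+As a minimal sketch of logging a scikit-learn model during a job (the toy dataset is used purely for illustration), the pattern might look like this:
+
+```python
+import mlflow
+import mlflow.sklearn
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+
+X, y = load_iris(return_X_y=True)
+
+with mlflow.start_run():
+    # Train a simple model and log it under the run's artifacts.
+    model = LogisticRegression(max_iter=200).fit(X, y)
+    mlflow.sklearn.log_model(model, artifact_path="model")
+```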
-## View run information
+## View job information
You can view the logged information using MLflow through the [MLflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) object. After a training job completes, you can retrieve it using the [MlFlowClient()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient):
params = finished_mlflow_run.data.params
<a name="view-the-experiment-in-the-web-portal"></a>
-## View run metrics in the studio UI
+## View job metrics in the studio UI
-You can browse completed run records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
+You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
-Navigate to the **Experiments** tab. To view all your runs in your Workspace across Experiments, select the **All runs** tab. You can drill down on runs for specific Experiments by applying the Experiment filter in the top menu bar.
+Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar.
For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
-You can also edit the run list table to select multiple runs and display either the last, minimum, or maximum logged value for your runs. Customize your charts to compare the logged metrics values and aggregates across multiple runs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
+You can also edit the job list table to select multiple jobs and display either the last, minimum, or maximum logged value for your jobs. Customize your charts to compare the logged metrics values and aggregates across multiple jobs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
-### View and download log files for a run
+### View and download log files for a job
Log files are an essential resource for debugging the Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
-1. Navigate to the **Experiments** tab.
+1. Navigate to the **Jobs** tab.
1. Select the runID for a specific run. 1. Select **Outputs and logs** at the top of the page. 2. Select **Download all** to download all your logs into a zip folder.
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-fairness-aml.md
The following example shows how to use the fairness package. We will upload mode
If you complete the previous steps (uploading generated fairness insights to Azure Machine Learning), you can view the fairness dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is the same visualization dashboard provided in Fairlearn, enabling you to analyze the disparities among your sensitive feature's subgroups (e.g., male vs. female). Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
- * **Experiments pane (Preview)**
- 1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
+ * **Jobs pane (Preview)**
+ 1. Select **Jobs** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
1. Select a particular experiment to view all the runs in that experiment. 1. Select a run, and then the **Fairness** tab to the explanation visualization dashboard. 1. Once landing on the **Fairness** tab, click on a **fairness id** from the menu on the right.
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
For a high-level overview of how environments work in Azure Machine Learning, se
## Browse curated environments
-Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images which reduces the run preparation cost and support training and inferencing scenarios.
+Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images, which reduce the job preparation cost, and they support training and inferencing scenarios.
Click on an environment to see detailed information about its contents. For more information, see [Azure Machine Learning curated environments](resource-curated-environments.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
The **Edit and submit** button opens the **Create a new Automated ML run** wizar
Once you have the best model at hand, it is time to deploy it as a web service to predict on new data. >[!TIP]
-> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](how-to-deploy-and-where.md?tabs=python#register-a-model-from-an-azure-ml-training-run-1) to the workspace.
+> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](how-to-deploy-and-where.md?tabs=python#register-a-model-from-an-azure-ml-training-job-1) to the workspace.
> > Once your model is registered, find it in the studio by selecting **Models** on the left pane. Once you open your model, you can select the **Deploy** button at the top of the screen, and then follow the instructions as described in **step 2** of the **Deploy your model** section.
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
After you create your Azure Databricks workspace and cluster,
1. Connect your Azure Databricks workspace and Azure Machine Learning workspace.
-Additional detail for these steps are in the following sections so you can successfully run your MLflow experiments with Azure Databricks.
+Additional details for these steps are in the following sections so you can successfully run your MLflow experiments with Azure Databricks.
## Install libraries
To link your ADB workspace to a new or existing Azure Machine Learning workspace
![Link Azure DB and Azure Machine Learning workspaces](./media/how-to-use-mlflow-azure-databricks/link-workspaces.png)
+> [!NOTE]
+> MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
## MLflow Tracking in your workspaces
-After you instantiate your workspace, MLflow Tracking is automatically set to be tracked in all of the following places:
+After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow Tracking is automatically set to be tracked in all of the following places:
* The linked Azure Machine Learning workspace. * Your original ADB workspace.
-All your experiments land in the managed Azure Machine Learning tracking service.
-
-The following code should be in your experiment notebook to get your linked Azure Machine Learning workspace.
-
-This code,
-
-* Gets the details of your Azure subscription to instantiate your Azure Machine Learning workspace.
-
-* Assumes you have an existing resource group and Azure Machine Learning workspace, otherwise you can [create them](how-to-manage-workspace.md).
-
-* Sets the experiment name. The `user_name` here is consistent with the `user_name` associated with the Azure Databricks workspace.
+You can then use MLflow in Azure Databricks in the same way you're used to. The following example sets the experiment name as it's usually done in Azure Databricks:
```python
-import mlflow
-import mlflow.azureml
-import azureml.mlflow
-import azureml.core
-
-from azureml.core import Workspace
-
-subscription_id = 'subscription_id'
-
-# Azure Machine Learning resource group NOT the managed resource group
-resource_group = 'resource_group_name'
-
-#Azure Machine Learning workspace name, NOT Azure Databricks workspace
-workspace_name = 'workspace_name'
-
-# Instantiate Azure Machine Learning workspace
-ws = Workspace.get(name=workspace_name,
- subscription_id=subscription_id,
- resource_group=resource_group)
+import mlflow
#Set MLflow experiment. experimentName = "/Users/{user_name}/{experiment_folder}/{experiment_name}" mlflow.set_experiment(experimentName)
+```
+In your training script, import `mlflow` to use the MLflow logging APIs, and start logging your run metrics. The following example logs the epoch loss metric.
+```python
+import mlflow
+mlflow.log_metric('epoch_loss', loss.item())
```
-> [!NOTE]
-> MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
+> [!NOTE]
+> As opposed to tracking, model registries don't support registering models on both Azure Machine Learning and Azure Databricks at the same time. Either one or the other has to be used. For more details, read the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow).
### Set MLflow Tracking to only track in your Azure Machine Learning workspace
-If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace.
-
-Include the following code in your script:
+If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace. This configuration has the advantage of enabling an easier path to deployment by using Azure Machine Learning deployment options.
+
+You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as demonstrated in the following example:
+
+ # [Using the Azure ML SDK v2](#tab/sdkv2)
+
+ You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DeviceCodeCredential
+
+ subscription_id = ""
+ aml_resource_group = ""
+ aml_workspace_name = ""
+
+ ml_client = MLClient(credential=DeviceCodeCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=aml_resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(aml_workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ # [Building the MLflow tracking URI](#tab/custom)
+
+ The Azure Machine Learning tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
+
+ ```python
+ import mlflow
+
+ aml_region = ""
+ subscription_id = ""
+ aml_resource_group = ""
+ aml_workspace_name = ""
+
+ azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{aml_resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{aml_workspace_name}"
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!NOTE]
+ > You can also get this URL by navigating to the [Azure ML Studio web portal](https://ml.azure.com) -> Click the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
+
+
+
+#### Experiment names in Azure Machine Learning
+
+When MLflow is configured to exclusively track experiments in the Azure Machine Learning workspace, the experiment naming convention has to follow the one used by Azure Machine Learning. In Azure Databricks, experiments are named with the path to where the experiment is saved, like `/Users/alice@contoso.com/iris-classifier`. In Azure Machine Learning, however, you have to provide the experiment name directly. Following the previous example, the same experiment would be named `iris-classifier` directly:
```python
-uri = ws.get_mlflow_tracking_uri()
-mlflow.set_tracking_uri(uri)
+mlflow.set_experiment(experiment_name="iris-classifier")
```
-In your training script, import `mlflow` to use the MLflow logging APIs, and start logging your run metrics. The following example, logs the epoch loss metric.
+## Logging models with MLflow
+
+After your model is trained, you can log it to the tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>` refers to the framework associated with the model. [Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api). In the following example, a model created with the Spark library MLlib is being logged. It's worth mentioning that the `spark` flavor doesn't refer to the fact that the model is trained in a Spark cluster, but to the training framework that was used (you can train a model with TensorFlow on Spark, in which case the flavor to use would be `tensorflow`).
```python
-import mlflow
-mlflow.log_metric('epoch_loss', loss.item())
+mlflow.spark.log_model(model, artifact_path = "model")
```
-## Register models with MLflow
+Models are logged inside the run being tracked. That means models are available either in both Azure Databricks and Azure Machine Learning (the default) or exclusively in Azure Machine Learning, if you configured the tracking URI to point only to it.
-After your model is trained, you can log and register your models to the backend tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>`, refers to the framework associated with the model. [Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api).
+> [!IMPORTANT]
+> Notice that here the parameter `registered_model_name` has not been specified. Read the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow) for more details about the implications of this parameter and how the registry works.
-The backend tracking server is the Azure Databricks workspace by default; unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace), then the backend tracking server is the Azure Machine Learning workspace.
+## Registering models in the registry with MLflow
-* **If a registered model with the name doesnΓÇÖt exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
+As opposed to tracking, **model registries can't operate** in Azure Databricks and Azure Machine Learning at the same time. Either one or the other has to be used. By default, the Azure Databricks workspace is used for model registries, unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace); in that case, the model registry is the Azure Machine Learning workspace.
-* **If a registered model with the name already exists**, the method creates a new model version and returns the version object.
+With the default configuration, the following line logs a model inside the corresponding runs of both Azure Databricks and Azure Machine Learning, but it registers the model only in Azure Databricks:
```python mlflow.spark.log_model(model, artifact_path = "model", registered_model_name = 'model_name')
+```
+
+* **If a registered model with the name doesnΓÇÖt exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
-mlflow.sklearn.log_model(model, artifact_path = "model",
- registered_model_name = 'model_name')
+* **If a registered model with the name already exists**, the method creates a new model version and returns the version object.
+
+### Registering models in the Azure Machine Learning Registry with MLflow
+
+At some point, you may want to start registering models in Azure Machine Learning. This configuration automatically enables all the deployment capabilities of Azure Machine Learning, including no-code deployment and model management capabilities. In that case, we recommend that you [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace). Doing so removes the ambiguity of where models are being registered.
+
+If you want to continue using the dual-tracking capabilities but register models in Azure Machine Learning, you can instruct MLflow to use Azure ML for model registries by configuring the MLflow model registry URI. This URI has the exact same format and value as the MLflow tracking URI.
+
+```python
+mlflow.set_registry_uri(azureml_mlflow_uri)
```
-## Create endpoints for MLflow models
+> [!NOTE]
+> The value of `azureml_mlflow_uri` was obtained in the same way demonstrated in [Set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace).
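+
+Once the registry URI points to Azure Machine Learning, registering a model from a tracked run could look like the following minimal sketch (the run ID and model name are hypothetical placeholders, and the run is assumed to have logged a model under the `model` artifact path):
+
+```python
+import mlflow
+
+# Hypothetical value: the ID of a run that logged a model under the "model" artifact path.
+run_id = "<run-id>"
+
+model_version = mlflow.register_model(
+    model_uri=f"runs:/{run_id}/model",
+    name="databricks-trained-model",
+)
+print(model_version.version)
+```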
+
+For a complete example of this scenario, see [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
+
+## Deploying and consuming models registered in Azure Machine Learning
-When you are ready to create an endpoint for your ML models. You can deploy as,
+Models registered in Azure Machine Learning using MLflow can be consumed as:
-* An Azure Machine Learning Request-Response web service for interactive scoring. This deployment allows you to leverage and apply the Azure Machine Learning model management, and data drift detection capabilities to your production models.
+* An Azure Machine Learning endpoint (real-time and batch): This deployment allows you to leverage Azure Machine Learning deployment capabilities for both real-time and batch inference in Azure Container Instances (ACI), Azure Kubernetes Service (AKS), or Managed Inference endpoints (MIR).
-* MLFlow model objects, which can be used in streaming or batch pipelines as Python functions or Pandas UDFs in Azure Databricks workspace.
+* MLflow model objects or Pandas UDFs, which can be used in Azure Databricks notebooks in streaming or batch pipelines.
### Deploy models to Azure Machine Learning endpoints
-You can leverage the [mlflow.azureml.deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) API to deploy a model to your Azure Machine Learning workspace. If you only registered the model to the Azure Databricks workspace, as described in the [register models with MLflow](#register-models-with-mlflow) section, specify the `model_name` parameter to register the model into Azure Machine Learning workspace.
+You can leverage the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. See the [How to deploy MLflow models](how-to-deploy-mlflow-models.md) page for complete details about how to deploy models to the different targets.
-Azure Databricks runs can be deployed to the following endpoints,
-* [Azure Container Instance](how-to-deploy-mlflow-models.md#deploy-to-azure-container-instance-aci)
-* [Azure Kubernetes Service](how-to-deploy-mlflow-models.md#deploy-to-azure-kubernetes-service-aks)
+> [!IMPORTANT]
+> Models need to be registered in the Azure Machine Learning registry in order to deploy them. If your models happen to be registered in the MLflow instance inside Azure Databricks, you will have to register them again in Azure Machine Learning. If this is your case, see the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
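+
+One way to drive a deployment from code is through MLflow's deployments client, which the `azureml-mlflow` plugin supports. The following is a minimal sketch, not a definitive implementation; it assumes `azureml-mlflow` is installed, the tracking URI points to your Azure Machine Learning workspace, and the model and deployment names are hypothetical placeholders:
+
+```python
+import mlflow
+from mlflow.deployments import get_deploy_client
+
+# The deployments client targets the Azure ML workspace that the tracking URI points to.
+deployment_client = get_deploy_client(mlflow.get_tracking_uri())
+
+# Hypothetical names: an already registered model and the deployment to create.
+# Optionally, a deployment configuration can be supplied, for example:
+# config={"deploy-config-file": "deployment_config.json"}
+deployment = deployment_client.create_deployment(
+    name="my-mlflow-deployment",
+    model_uri="models:/my-model/1",
+)
+```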
-### Deploy models to ADB endpoints for batch scoring
+### Deploy models to ADB for batch scoring using UDFs
You can choose Azure Databricks clusters for batch scoring. The MLFlow model is loaded and used as a Spark Pandas UDF to score new data.
If you don't plan to use the logged metrics and artifacts in your workspace, the
## Example notebooks
-The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article.
+The [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb) notebook demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also covers how to handle cases where you also want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure ML for deployment.
## Next steps * [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
machine-learning How To Use Pipeline Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-ui.md
If your pipeline fails or gets stuck on a node, first view the logs.
The **user_logs folder** contains information about user code generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up.
- The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-run).
+ The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a job](how-to-log-view-metrics.md#view-and-download-log-files-for-a-job).
:::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-user-log.png":::
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
read -p "Press [ENTER] to continue: "
For a step-by-step tutorial that guides you through the process of creating an ARM template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
marketplace Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics.md
description: Access analytic reports to monitor sales, evaluate performance, and
---++ Last updated 06/21/2022
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-get-sas-uri.md
description: Generate a shared access signature (SAS) URI for a virtual hard dis
++ Last updated 06/23/2021
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license.md
Previously updated : 05/25/2022 Last updated : 06/23/2022 # ISV app license management
The ISV creates a solution package for the offer that includes license plan info
| Transactable offers | Licensable-only offers | | | - |
-| Customers can manage subscriptions for the Apps they purchased in [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in Partner Center [deal registration portal](https://partner.microsoft.com/) |
+| Customers can manage subscriptions for the Apps they purchased in [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in Partner Center [deal registration](/partner-center/register-deals) portal |
### Step 5: Assign licenses
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
The price change feature supports the following scenarios:
### Supported offer types
-The ability to change prices is available for both public and private plans of all offers transacted through Microsoft: Azure application (Managed App), Software as a service, and Virtual Machine.
+The ability to change prices is available for both public and private plans of offers transacted through Microsoft.
+
+Supported offer types:
+- Azure application (Managed App)
+- Software as a service (SaaS)
+- Azure virtual machine
+
+Price changes for the following offer types are not yet supported:
+- Dynamics 365 apps on Dataverse and Power Apps
+- Power BI visual
+- Azure container
### Unsupported scenarios and limitations
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
Here are some concepts to be familiar with when using virtual networks with MySQ
* If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, please create private DNS zones that end with `mysql.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md). > [!IMPORTANT]
- > Private DNS zone names must end with `mysql.database.azure.com`.
- >If you are connecting to the Azure Database for MySQL - Flexible sever with SSL and are using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
+ > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to the Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (sslmode=VERIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
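+
+For example, a minimal connection sketch using the `mysql-connector-python` package with full certificate verification might look like the following; the server name, credentials, database, and certificate path are placeholders you would replace with your own values:
+
+```python
+import mysql.connector
+
+# Placeholder values: replace with your server name, credentials, database,
+# and the path to the CA certificate you downloaded.
+conn = mysql.connector.connect(
+    host="<servername>.mysql.database.azure.com",
+    user="<admin-user>",
+    password="<password>",
+    database="<database>",
+    ssl_ca="DigiCertGlobalRootCA.crt.pem",
+    ssl_verify_identity=True,
+)
+```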
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md). ## Integration with custom DNS server
-If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNET's DNS Server setting. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+If you are using a custom DNS server, then you must **use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server**. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNET's DNS Server setting. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+> [!IMPORTANT]
+ > For successful provisioning of the Flexible Server, even if you are using a custom DNS server, **you must not block DNS traffic to [AzurePlatformDNS](../../virtual-network/service-tags-overview.md) using [NSG](../../virtual-network/network-security-groups-overview.md)**.
## Private DNS zone and VNET peering Private DNS zone settings and VNET peering are independent of each other. Please refer to the [Using Private DNS Zone](concepts-networking-vnet.md#using-private-dns-zone) section above for more details on creating and using Private DNS zones.
You can then use the flexible servername (FQDN) to connect from the client appli
* After the flexible server is deployed to a virtual network and subnet, you cannot move it to another virtual network or subnet. You cannot move the virtual network into another resource group or subscription. * Subnet size (address spaces) cannot be increased once resources exist in the subnet * Flexible server doesn't support Private Link. Instead, it uses VNet injection to make flexible server available within a VNet.-
-> [!NOTE]
-> If you are using a custom DNS server, then you must use a DNS forwarder to resolve the following FQDNs:
-> * Azure Database for MySQL - Flexible Server
-> * Azure Storage Resources (for successful provisioning of the Flexible Server)
->
-> Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
- ## Next steps * Learn how to enable private access (vnet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
echo "Press [ENTER] to continue ..."
For a step-by-step tutorial that guides you through the process of creating an ARM template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
networking Working Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/working-remotely-support.md
The Microsoft network is designed to meet the requirements and provide optimal p
Azure VPN gateway supports both Point-to-Site (P2S) and Site-to-Site (S2S) VPN connections. Using the Azure VPN gateway you can scale your employee's connections to securely access both your Azure deployed resources and your on-premises resources. For more information, see [How to enable users to work remotely](../vpn-gateway/work-remotely-support.md).
-If you are using Secure Sockets Tunneling Protocol (SSTP), the number of concurrent connections is limited to 128. To get a higher number of connections, we suggest transitioning to OpenVPN or IKEv2. For more information, see [Transition to OpenVPN protocol or IKEv2 from SSTP](../vpn-gateway/ikev2-openvpn-from-sstp.md
-).
+If you are using Secure Sockets Tunneling Protocol (SSTP), the number of concurrent connections is limited to 128. To get a higher number of connections, we suggest transitioning to OpenVPN or IKEv2. For more information, see [Transition to OpenVPN protocol or IKEv2 from SSTP](../vpn-gateway/ikev2-openvpn-from-sstp.md).
To access your resources deployed in Azure, remote developers could use Azure Bastion solution, instead of VPN connection to get secure shell access (RDP or SSH) without requiring public IPs on the VMs being accessed. For more information, see [Work remotely using Azure Bastion](../bastion/work-remotely-support.md).
object-anchors Model Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/model-conversion-error-codes.md
Title: Model Conversion Error Codes
-description: Model conversion error codes for the Azure Object Anchors service.
+ Title: Model conversion error codes
+description: Learn about model conversion error codes and exception errors in the Azure Object Anchors service, and how to address them.
Previously updated : 04/20/2021- Last updated : 06/10/2022+ + #Customer intent: Explain different modes of model conversion failure and how to recover from them.
-# Model Conversion Error Codes
+# Model conversion error codes
-For common modes of model conversion failure, the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionProperties` object obtained from the `Value` field in the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionOperation` contains an ErrorCode field of the `ConversionErrorCode` type. This type enumerates these common modes of failure for error message localization, failure recovery, and tips to the user on how the error can be corrected.
+For common modes of model conversion failure, the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionProperties` object you get from the `Value` field in the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionOperation` contains an `ErrorCode` field of the `ConversionErrorCode` type.
-| Error Code | Description | Mitigation |
+The `ConversionErrorCode` type enumerates the following common modes of model conversion failure. These enumerations are useful for error message localization, failure recovery, and tips to the user on how to correct the error.
+
+| Error code | Description | Mitigation |
| | | |
-| INVALID_ASSET_URI | The asset at the URI provided when starting the conversion job could not be found. | When triggering an asset conversion job, provide an upload URI obtained from the service where the asset to be converted has been uploaded. |
-| INVALID_JOB_ID | The provided ID for the asset conversion job to be created was set to the default all-zero GUID. | If a GUID is specified when creating an asset conversion job, ensure it is not the default all-zero GUID. |
+| INVALID_ASSET_URI | The asset at the URI provided when starting the conversion job couldn't be found. | When triggering an asset conversion job, provide an upload URI you get from the service where the asset to be converted is uploaded. |
+| INVALID_JOB_ID | The provided ID for the asset conversion job was set to the default all-zero GUID. | If a GUID is specified when creating an asset conversion job, make sure it isn't the default all-zero GUID. |
| INVALID_GRAVITY | The gravity vector provided when creating the asset conversion job was a fully zeroed vector. | When starting an asset conversion, provide the gravity vector that corresponds to the uploaded asset. |
-| INVALID_SCALE | The provided scale factor was not a positive non-zero value. | When starting an asset conversion, provide the scalar value that corresponds to the measurement unit scale (with regard to meters) of the uploaded asset. |
-| ASSET_SIZE_TOO_LARGE | The intermediate .PLY file generated from the asset or its serialized equivalent was too large. | Refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-| ASSET_DIMENSIONS_OUT_OF_BOUNDS | The dimensions of the asset exceeded the physical dimension limit. This can be a sign of an improperly set scale for the asset when creating a job. | Inspect the `ScaledAssetDimensions` property in your `AssetConversionProperties` object: it will contain the actual dimensions of the asset that were calculated after applying scale (in meters). Then, refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity, and ensure the provided scale corresponds to the uploaded asset. |
-| ZERO_FACES | The intermediate .PLY file generated from the asset was determined to have no faces, making it invalid for conversion. | Ensure the asset is a valid mesh. |
-| INVALID_FACE_VERTICES | The intermediate .PLY file generated from the asset contained faces that referenced nonexistent vertices. | Ensure the asset file is validly constructed. |
-| ZERO_TRAJECTORIES_GENERATED | The camera trajectories generated from the uploaded asset were empty. | Refer to the [asset guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-| TOO_MANY_RIG_POSES | The number of rig poses in the intermediate .PLY file exceeded service limits. | Refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-| SERVICE_ERROR | An unknown service error occurred. | Contact a member of the Object Anchors service team if the issue persists: https://github.com/Azure/azure-object-anchors/issues |
-| ASSET_CANNOT_BE_CONVERTED | The provided asset was corrupted, malformed, or otherwise unable to be converted in its provided format. | Ensure the asset is a validly constructed file of the specified type, and refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-
-Any errors that occur outside the actual asset conversion jobs will be thrown as exceptions. Most notably, the `Azure.RequestFailedException` can be thrown for service calls that receive an unsuccessful (4xx or 5xx) or unexpected HTTP response code. For further details on these exceptions, examine the `Status`, `ErrorCode`, or `Message` fields on the exception.
+| INVALID_SCALE | The provided scale factor wasn't a positive non-zero value. | When starting an asset conversion, provide the scalar value that corresponds to the measurement unit scale, with respect to meters, of the uploaded asset. |
+| ASSET_SIZE_TOO_LARGE | The intermediate PLY file generated from the asset or its serialized equivalent was too large. | Ensure conformity with the [asset size guidelines](faq.md) before submitting an asset for conversion. |
+| ASSET_DIMENSIONS_OUT_OF_BOUNDS | The dimensions of the asset exceeded the physical dimension limit. This error can be a sign of an improperly set scale for the asset when creating a job. | Inspect the `ScaledAssetDimensions` property in your `AssetConversionProperties` object. This property contains the actual dimensions of the asset, in meters, calculated after the scale was applied. Then, ensure conformity with the [asset size guidelines](faq.md) before submitting the asset for conversion. Make sure the provided scale corresponds to the uploaded asset. |
+| ZERO_FACES | The intermediate PLY file generated from the asset was determined to have no faces, making it invalid for conversion. | Ensure the asset is a valid mesh. |
+| INVALID_FACE_VERTICES | The intermediate PLY file generated from the asset contained faces that referenced nonexistent vertices. | Ensure the asset file is validly constructed. |
+| ZERO_TRAJECTORIES_GENERATED | The camera trajectories generated from the uploaded asset were empty. | Ensure conformity with the [asset size guidelines](faq.md) before submitting an asset for conversion. |
+| TOO_MANY_RIG_POSES | The number of rig poses in the intermediate PLY file exceeded service limits. | Ensure conformity with the [asset size guidelines](faq.md) before submitting an asset for conversion. |
+| SERVICE_ERROR | An unknown service error occurred. | [File a GitHub issue with the Object Anchors service team](https://github.com/Azure/azure-object-anchors/issues) if the issue persists. |
+| ASSET_CANNOT_BE_CONVERTED | The provided asset was corrupted, malformed, or otherwise couldn't be converted in its provided format. | Ensure the asset is a validly constructed file of the specified type. Ensure conformity with the [asset size guidelines](faq.md) before submitting the asset for conversion. |
+
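For example, a minimal C# sketch of how these codes might drive user-facing messages is shown below. It assumes the error code has already been read from the completed job's properties (for example, from `AssetConversionProperties`) and is handled here by its string name, so no particular SDK enum type is required; only a few codes from the table are covered.

```csharp
using System;

// Minimal sketch: map a conversion error code, by the string name listed in the
// table above, to a user-facing message and mitigation hint. Reading the code from
// the finished job (for example, from AssetConversionProperties) is assumed to have
// happened already.
static (string Message, string Hint) DescribeConversionError(string errorCode) =>
    errorCode switch
    {
        "INVALID_GRAVITY" => (
            "The gravity vector was all zeros.",
            "Provide the gravity vector that corresponds to the uploaded asset."),
        "INVALID_SCALE" => (
            "The scale factor wasn't a positive non-zero value.",
            "Provide the asset's unit scale with respect to meters."),
        "ASSET_DIMENSIONS_OUT_OF_BOUNDS" => (
            "The scaled asset exceeded the physical dimension limit.",
            "Check ScaledAssetDimensions and the asset size guidelines, then correct the scale."),
        _ => (
            "The conversion job failed.",
            "See the error code table for details; file a GitHub issue if the problem persists.")
    };

// Example: produce a message for a failed job.
var (message, hint) = DescribeConversionError("INVALID_SCALE");
Console.WriteLine($"{message} {hint}");
```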
+## Exception errors
+
+Any errors that occur outside the actual asset conversion jobs are thrown as exceptions. Most notably, the `Azure.RequestFailedException` can be thrown for service calls that receive an unsuccessful (4xx or 5xx) or unexpected HTTP response code. For further details on these exceptions, examine the `Status`, `ErrorCode`, or `Message` fields on the exception.

| Exception | Cause |
| | |
-| ArgumentException | <ul><li>Occurs when using an invalidly constructed or all zero account ID to construct a request with the ObjectAnchorsConversionClient.</li><li>Occurs when attempting to initialize the ObjectAnchorsConversionClient using an invalid whitespace account domain.</li><li>Occurs when an unsupported service version is provided to the ObjectAnchorsConversionClient through ObjectAnchorsConversionClientOptions.</li></ul> |
-| ArgumentNullException | <ul><li>Occurs when attempting to initialize the ObjectAnchorsConversionClient using an invalid null account domain.</li><li>Occurs when attempting to initialize the ObjectAnchorsConversionClient using an invalid null credential.</li></ul> |
-| RequestFailedException | <ul><li>Occurs for all other issues resulting from a bad HTTP status code (unrelated to the status of a job that will/is/has run), such as an account not being found, an invalid upload uri being detected by the fronted, frontend service error, etc.</li></ul> |
-| UnsupportedAssetFileTypeException | <ul><li>Occurs when attempting to submit a job with an asset with an extension or specified filetype that is unsupported by the Azure Object Anchors Conversion service.</li></ul> |
+| ArgumentException | <ul><li>Using an invalidly constructed or all-zero account ID to construct a request with the `ObjectAnchorsConversionClient`.</li><li>Attempting to initialize the `ObjectAnchorsConversionClient` using an invalid whitespace account domain.</li><li>Providing an unsupported service version to the `ObjectAnchorsConversionClient` through `ObjectAnchorsConversionClientOptions`.</li></ul> |
+| ArgumentNullException | <ul><li>Attempting to initialize the `ObjectAnchorsConversionClient` using an invalid null account domain.</li><li>Attempting to initialize the `ObjectAnchorsConversionClient` using an invalid null credential.</li></ul> |
+| RequestFailedException | <ul><li>All other issues resulting from a bad HTTP status code, unrelated to job status. Examples include an account not being found, the front end detecting an invalid upload URI, or a front end service error.</li></ul> |
+| UnsupportedAssetFileTypeException | <ul><li>Submitting an asset with an extension or specified filetype that the Azure Object Anchors Conversion service doesn't support.</li></ul> |
+
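As a rough illustration of this distinction, the sketch below wraps a client call and inspects the fields called out above. `SubmitConversionJobAsync` is only a hypothetical placeholder for whichever `ObjectAnchorsConversionClient` call you make, not an API of the library, and the snippet assumes the Azure.Core package for `RequestFailedException`.

```csharp
using System;
using System.Threading.Tasks;
using Azure; // RequestFailedException (Azure.Core package)

// Hypothetical placeholder for whichever ObjectAnchorsConversionClient call is made;
// this is not an API of the library.
static Task SubmitConversionJobAsync() => Task.CompletedTask;

try
{
    await SubmitConversionJobAsync();
}
catch (RequestFailedException ex)
{
    // Unsuccessful (4xx or 5xx) or unexpected HTTP response code from the service.
    Console.WriteLine($"Request failed: Status={ex.Status}, ErrorCode={ex.ErrorCode}, Message={ex.Message}");
}
catch (ArgumentException ex)
{
    // Also covers ArgumentNullException: invalid account ID, account domain, or credential.
    Console.WriteLine($"Invalid client argument: {ex.Message}");
}
```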
+## Next steps
+
+- [Quickstart: Create an Object Anchors model from a 3D model](quickstarts/get-started-model-conversion.md)
+- [Frequently asked questions about Azure Object Anchors](faq.md)
+- [Azure Object Anchors client library for .NET](/dotnet/api/overview/azure/mixedreality.objectanchors.conversion-readme-pre)
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
Once you're logged into the OpenShift Web Console, click on the **?** on the top
![Image shows Azure Red Hat OpenShift login screen](media/aro4-download-cli.png)
-You can also download the latest release of the CLI appropriate to your machine from <https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/>.
+You can also download the [latest release of the CLI](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) appropriate to your machine.
## Connect using the OpenShift CLI
openshift Tutorial Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-connect-cluster.md
Once you're logged into the OpenShift Web Console, click on the **?** on the top
![Screenshot that highlights the Command Line Tools option in the list when you select the ? icon.](media/aro4-download-cli.png)
-You can also download the latest release of the CLI appropriate to your machine from <https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/>.
+You can also download the [latest release of the CLI](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) appropriate to your machine.
If you're running the commands on the Azure Cloud Shell, download the latest OpenShift 4 CLI for Linux.
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a list of high-level features and capabilities comp
| Microsoft Defender for Cloud | Yes | No |
| Resource health | Yes | Yes |
| Service health | Yes | Yes |
-| Performance insights (iPerf) | Yes | Yes |
+| Performance insights (iPerf) | Yes | Yes. Not available in the portal |
| Major version upgrades support | No | No |
| Minor version upgrades | Yes. Automatic during maintenance window | Yes. Automatic during maintenance window |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* Support for choosing [standby availability zone](./how-to-manage-high-availability-portal.md) when deploying zone-redundant high availability.
* Support for [extensions](concepts-extensions.md) PLV8, pgrouting with new servers<sup>$</sup>
* Version updates for [extension](concepts-extensions.md) PostGIS.
+* General availability of Azure Database for PostgreSQL - Flexible Server in Canada East and Jio India West regions.
<sup>**$**</sup> New servers get these features automatically. In your existing servers, these features are enabled during your server's future maintenance window.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
- Title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts"-
-descripti