Updates from: 06/24/2022 01:11:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Declare your claims in the [claims schema](claimsschema.md). Open the extensions
</BuildingBlocks> --> ```
-A claims transformation technical profile initiates the **isForgotPassword** claim. The technical profile is referenced later. When invoked, it sets the value of the **isForgotPassword** claim to `true`. Find the **ClaimsProviders** element. If the element doesn't exist, add it. Then add the following claims provider:
+### Add the technical profiles
+A claims transformation technical profile accesses the `isForgotPassword` claim. The technical profile is referenced later. When it's invoked, it sets the value of the `isForgotPassword` claim to `true`. Find the **ClaimsProviders** element (if the element doesn't exist, create it), and then add the following claims provider:
```xml <!--
A claims transformation technical profile initiates the **isForgotPassword** cla
<Item Key="setting.forgotPasswordLinkOverride">ForgotPasswordExchange</Item> </Metadata> </TechnicalProfile>
+ <TechnicalProfile Id="LocalAccountWritePasswordUsingObjectId">
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-AAD" />
+ </TechnicalProfile>
</TechnicalProfiles> </ClaimsProvider> <!--
A claims transformation technical profile initiates the **isForgotPassword** cla
The **setting.forgotPasswordLinkOverride** metadata item of the **SelfAsserted-LocalAccountSignin-Email** technical profile defines the password reset claims exchange that executes in your user journey.
+In the **LocalAccountWritePasswordUsingObjectId** technical profile, the **UseTechnicalProfileForSessionManagement** reference to the `SM-AAD` session manager is required for the user to perform subsequent logins successfully under [SSO](./custom-policy-reference-sso.md) conditions.
+ ### Add the password reset sub journey The user can now sign in, sign up, and perform password reset in your user journey. To better organize the user journey, you can use a [sub journey](subjourneys.md) to handle the password reset flow.
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
The following template can be used to create a Conditional Access policy with di
## Template 3: Block locations with Conditional Access
-With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. More information about the location condition in Conditional Access can be found in the article,
-[Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md
-
-Configure Conditional Access through Azure portal or Microsoft Graph APIs to enable a Conditional Access policy blocking access to specific locations.
-For more information about the location condition in Conditional Access can be found in the article, [Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md)
+With the location condition in Conditional Access, you can control access to your cloud apps based on the network location of a user. Configure Conditional Access via the Azure portal or Microsoft Graph APIs to enable a Conditional Access policy blocking access to specific locations. For more information, see [Using the location condition in a Conditional Access policy](../active-directory/conditional-access/location-condition.md).
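If you'd rather script this than use the portal, the following Microsoft Graph PowerShell sketch shows the general shape of such a policy. It's an illustration only: the display names, country list, and report-only state are placeholder assumptions rather than values from this article.

```powershell
# Sketch: block sign-ins from a country-based named location.
# Assumes the Microsoft Graph PowerShell SDK is installed.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

# Create a named location for the countries/regions to block (placeholder list)
$location = New-MgIdentityConditionalAccessNamedLocation -BodyParameter @{
    "@odata.type"                     = "#microsoft.graph.countryNamedLocation"
    displayName                       = "Blocked countries (example)"
    countriesAndRegions               = @("KP")
    includeUnknownCountriesAndRegions = $false
}

# Create a Conditional Access policy that blocks access from that location.
# Start in report-only mode and review sign-in logs before enforcing.
New-MgIdentityConditionalAccessPolicy -BodyParameter @{
    displayName   = "Block locations (example)"
    state         = "enabledForReportingButNotEnforced"
    conditions    = @{
        applications = @{ includeApplications = @("All") }
        users        = @{ includeUsers = @("All") }
        locations    = @{ includeLocations = @($location.Id) }
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}
```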
### Define locations
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
In your own environment, if your SPA app uses MSAL.js 1.3 or earlier and the imp
1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **D tokens (used for implicit and hybrid flows)** check boxes.
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
1. Select **Save**.
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Use the steps below to provision roles for a user to your application. Note that
- **SingleAppRoleAssignment** - **When to use:** Use the SingleAppRoleAssignment expression to provision a single role for a user and to specify the primary role.
- - **How to configure:** Use the steps described above to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from: (roles[primary eq "True"].display, roles[primary eq "True].type, and roles[primary eq "True"].value). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
-
+ - **How to configure:** Use the steps described above to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from (`roles[primary eq "True"].display`, `roles[primary eq "True"].type`, and `roles[primary eq "True"].value`). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
+ ![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png) - **Things to consider** - Ensure that multiple roles are not assigned to a user. We cannot guarantee which role will be provisioned.
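For example (as an illustration only, not values taken from your app), such a mapping might pair the expression `SingleAppRoleAssignment([appRoleAssignments])` with `roles[primary eq "True"].value` as the target attribute.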
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
Previously updated : 06/09/2022 Last updated : 06/23/2022
With phone call verification during SSPR or Azure AD Multi-Factor Authentication
If you have problems with phone authentication for Azure AD, review the following troubleshooting steps: * “You've hit our limit on verification calls” or “You’ve hit our limit on text verification codes” error messages during sign-in
- * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation does not apply to the Microsoft Entra Authenticator app or verification codes. If you have hit these limits, you can use the Authenticator App, verification code or try to sign in again in a few minutes.
+ * Microsoft may limit repeated authentication attempts that are performed by the same user or organization in a short period of time. This limitation does not apply to Microsoft Authenticator or verification codes. If you have hit these limits, you can use the Authenticator app or a verification code, or try to sign in again in a few minutes.
* "Sorry, we're having trouble verifying your account" error message during sign-in * Microsoft may limit or block voice or SMS authentication attempts that are performed by the same user, phone number, or organization due to high number of voice or SMS authentication attempts. If you are experiencing this error, you can try another method, such as Authenticator App or verification code, or reach out to your admin for support. * Blocked caller ID on a single device.
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
The two-gate policy requires two pieces of authentication data, such as an email
* A custom domain has been configured for your Azure AD tenant, such as *contoso.com*; or * Azure AD Connect is synchronizing identities from your on-premises directory
-You can disable the use of SSPR for administrator accounts using the [Set-MsolCompanySettings](/powershell/module/msonline/set-msolcompanysettings) PowerShell cmdlet. The `-SelfServePasswordResetEnabled $False` parameter disables SSPR for administrators.
+You can disable the use of SSPR for administrator accounts using the [Set-MsolCompanySettings](/powershell/module/msonline/set-msolcompanysettings) PowerShell cmdlet. The `-SelfServePasswordResetEnabled $False` parameter disables SSPR for administrators. Policy changes to disable or enable SSPR for administrator accounts can take up to 60 minutes to take effect.
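For reference, a minimal PowerShell sketch of this change follows. It assumes the MSOnline module is installed and that you're signed in with an account that can modify company settings.

```powershell
# Connect to Azure AD with the MSOnline module
Connect-MsolService

# Disable self-service password reset for administrator accounts
Set-MsolCompanySettings -SelfServePasswordResetEnabled $false

# To re-enable it later:
# Set-MsolCompanySettings -SelfServePasswordResetEnabled $true
```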
### Exceptions
active-directory How To Mfa Additional Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-additional-context.md
Title: Use additional context in Microsoft Entra Authenticator notifications (Preview) - Azure Active Directory
+ Title: Use additional context in Microsoft Authenticator notifications (Preview) - Azure Active Directory
description: Learn how to use additional context in MFA notifications Previously updated : 06/08/2022 Last updated : 06/23/2022 # Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to use additional context in Microsoft Entra Authenticator app notifications (Preview) - Authentication Methods Policy
+# How to use additional context in Microsoft Authenticator app notifications (Preview) - Authentication Methods Policy
-This topic covers how to improve the security of user sign-in by adding the application and location in Microsoft Entra Authenticator app push notifications.
+This topic covers how to improve the security of user sign-in by adding the application and location in Microsoft Authenticator app push notifications.
## Prerequisites
To turn off additional context, you'll need to PATCH remove **displayAppInformat
To enable additional context in the Azure AD portal, complete the following steps:
-1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Entra Authenticator**.
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. Select the target users, click the three dots on the right, and click **Configure**. ![Screenshot of how to configure number match.](media/howto-authentication-passwordless-phone/configure.png)
Additional context is not supported for Network Policy Server (NPS).
## Next steps
-[Authentication methods in Azure Active Directory - Microsoft Entra Authenticator app](concept-authentication-authenticator-app.md)
+[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 06/09/2022 Last updated : 06/23/2022
# How to use number matching in multifactor authentication (MFA) notifications (Preview) - Authentication Methods Policy
-This topic covers how to enable number matching in Microsoft Entra Authenticator push notifications to improve user sign-in security.
+This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security.
>[!NOTE] >Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled by default for all tenants a few months after general availability (GA).<br>
To turn number matching off, you will need to PATCH remove **numberMatchingRequi
To enable number matching in the Azure AD portal, complete the following steps:
-1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Entra Authenticator**.
+1. In the Azure AD portal, click **Security** > **Authentication methods** > **Microsoft Authenticator**.
1. Select the target users, click the three dots on the right, and click **Configure**. ![Screenshot of configuring number match.](media/howto-authentication-passwordless-phone/configure.png)
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Title: Nudge users to set up Microsoft Entra Authenticator app - Azure Active Directory
-description: Learn how to move your organization away from less secure authentication methods to the Microsoft Entra Authenticator app
+ Title: Nudge users to set up Microsoft Authenticator - Azure Active Directory
+description: Learn how to move your organization away from less secure authentication methods to Microsoft Authenticator
Previously updated : 06/09/2022 Last updated : 06/23/2022
-# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Entra Authenticator app in Azure AD to improve and secure user sign-in events.
+# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-# How to run a registration campaign to set up Microsoft Entra Authenticator - Microsoft Entra Authenticator app
+# How to run a registration campaign to set up Microsoft Authenticator - Microsoft Authenticator
-You can nudge users to set up the Microsoft Entra Authenticator app during sign-in. Users will go through their regular sign-in, perform multifactor authentication as usual, and then be prompted to set up the Microsoft Entra Authenticator app. You can include or exclude users or groups to control who gets nudged to set up the app. This allows targeted campaigns to move users from less secure authentication methods to the Authenticator app.
+You can nudge users to set up Microsoft Authenticator during sign-in. Users will go through their regular sign-in, perform multifactor authentication as usual, and then be prompted to set up Microsoft Authenticator. You can include or exclude users or groups to control who gets nudged to set up the app. This allows targeted campaigns to move users from less secure authentication methods to the Authenticator app.
In addition to choosing who can be nudged, you can define how many days a user can postpone, or "snooze", the nudge. If a user taps **Not now** to snooze the app setup, they'll be nudged again on the next MFA attempt after the snooze duration has elapsed.
In addition to choosing who can be nudged, you can define how many days a user c
- Users can't have already set up the Authenticator app for push notifications on their account. - Admins need to enable users for the Authenticator app using one of these policies: - MFA Registration Policy: Users will need to be enabled for **Notification through mobile app**.
- - Authentication Methods Policy: Users will need to be enabled for the Authenticator app and the Authentication mode set to **Any** or **Push**. If the policy is set to **Passwordless**, the user won't be eligible for the nudge. For more information about how to set the Authentication mode, see [Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md).
+ - Authentication Methods Policy: Users will need to be enabled for the Authenticator app and the Authentication mode set to **Any** or **Push**. If the policy is set to **Passwordless**, the user won't be eligible for the nudge. For more information about how to set the Authentication mode, see [Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md).
## User experience
In addition to choosing who can be nudged, you can define how many days a user c
1. User taps **Next** and steps through the Authenticator app setup. 1. First download the app.
- ![User downloads the Microsoft Entra Authenticator app](./media/how-to-nudge-authenticator-app/download.png)
+ ![User downloads Microsoft Authenticator](./media/how-to-nudge-authenticator-app/download.png)
1. See how to set up the Authenticator app.
- ![User sets up the Microsoft Entra Authenticator app](./media/how-to-nudge-authenticator-app/setup.png)
+ ![User sets up Microsoft Authenticator](./media/how-to-nudge-authenticator-app/setup.png)
1. Scan the QR Code.
It's the same as snoozing.
## Next steps
-[Enable passwordless sign-in with the Microsoft Entra Authenticator app](howto-authentication-passwordless-phone.md)
+[Enable passwordless sign-in with Microsoft Authenticator](howto-authentication-passwordless-phone.md)
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Title: Passwordless sign-in with the Microsoft Entra Authenticator app - Azure Active Directory
-description: Enable passwordless sign-in to Azure AD using the Microsoft Entra Authenticator app
+ Title: Passwordless sign-in with Microsoft Authenticator - Azure Active Directory
+description: Enable passwordless sign-in to Azure AD using Microsoft Authenticator
Previously updated : 06/15/2022 Last updated : 06/23/2022
-# Enable passwordless sign-in with the Microsoft Entra Authenticator app
+# Enable passwordless sign-in with Microsoft Authenticator
-The Microsoft Entra Authenticator app can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology.
+Microsoft Authenticator can be used to sign in to any Azure AD account without using a password. Microsoft Authenticator uses key-based authentication to enable a user credential that is tied to a device, where the device uses a PIN or biometric. [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification) uses a similar technology.
This authentication technology can be used on any device platform, including mobile. This technology can also be used with any app or website that integrates with Microsoft Authentication Libraries.
To use passwordless authentication in Azure AD, first enable the combined regist
### Enable passwordless phone sign-in authentication methods
-Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Entra Authenticator** authentication method policy manages both the traditional push MFA method, as well as the passwordless authentication method.
+Azure AD lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method, as well as the passwordless authentication method.
To enable the authentication method for passwordless phone sign-in, complete the following steps: 1. Sign in to the [Azure portal](https://portal.azure.com) with an *authentication policy administrator* account. 1. Search for and select *Azure Active Directory*, then browse to **Security** > **Authentication methods** > **Policies**.
-1. Under **Microsoft Entra Authenticator**, choose the following options:
+1. Under **Microsoft Authenticator**, choose the following options:
1. **Enable** - Yes or No 1. **Target** - All users or Select users 1. Each added group or user is enabled by default to use Microsoft Authenticator in both passwordless and push notification modes ("Any" mode). To change this, for each row:
Users register themselves for the passwordless authentication method of Azure AD
1. Sign in, then click **Add method** > **Authenticator app** > **Add** to add Microsoft Authenticator. 1. Follow the instructions to install and configure the Microsoft Authenticator app on your device. 1. Select **Done** to complete Authenticator configuration.
-1. In **Microsoft Entra Authenticator**, choose **Enable phone sign-in** from the drop-down menu for the account registered.
+1. In **Microsoft Authenticator**, choose **Enable phone sign-in** from the drop-down menu for the account registered.
1. Follow the instructions in the app to finish registering the account for passwordless phone sign-in.
-An organization can direct its users to sign in with their phones, without using a password. For further assistance configuring Microsoft Authenticator and enabling phone sign-in, see [Sign in to your accounts using the Microsoft Entra Authenticator app](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c).
+An organization can direct its users to sign in with their phones, without using a password. For further assistance configuring Microsoft Authenticator and enabling phone sign-in, see [Sign in to your accounts using the Microsoft Authenticator app](https://support.microsoft.com/account-billing/sign-in-to-your-accounts-using-the-microsoft-authenticator-app-582bdc07-4566-4c97-a7aa-56058122714c).
> [!NOTE] > Users who aren't allowed by policy to use phone sign-in are no longer able to enable it within Microsoft Authenticator.
The user is then presented with a number. The app prompts the user to authentica
After the user has utilized passwordless phone sign-in, the app continues to guide the user through this method. However, the user will see the option to choose another method. ## Known Issues
active-directory Howto Mfaserver Deploy Mobileapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy-mobileapp.md
Title: Azure MFA Server Mobile App Web Service - Azure Active Directory
-description: Configure MFA server to send push notifications to users with the Microsoft Entra Authenticator App.
+description: Configure MFA server to send push notifications to users with the Microsoft Authenticator App.
Previously updated : 06/09/2022 Last updated : 06/23/2022
# Enable mobile app authentication with Azure Multi-Factor Authentication Server
-The Microsoft Entra Authenticator app offers an additional out-of-band verification option. Instead of placing an automated phone call or SMS to the user during login, Azure Multi-Factor Authentication pushes a notification to the Authenticator app on the user's smartphone or tablet. The user simply taps **Verify** (or enters a PIN and taps "Authenticate") in the app to complete their sign-in.
+The Microsoft Authenticator app offers an additional out-of-band verification option. Instead of placing an automated phone call or SMS to the user during login, Azure Multi-Factor Authentication pushes a notification to the Authenticator app on the user's smartphone or tablet. The user simply taps **Verify** (or enters a PIN and taps "Authenticate") in the app to complete their sign-in.
Using a mobile app for two-step verification is preferred when phone reception is unreliable. If you use the app as an OATH token generator, it doesn't require any network or internet connection.
active-directory Block Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md
description: Learn how to improve your security posture by blocking legacy authe
Previously updated : 02/14/2022 Last updated : 06/21/2022
Alex Weinert, Director of Identity Security at Microsoft, in his March 12, 2020
> For MFA to be effective, you also need to block legacy authentication. This is because legacy authentication protocols like POP, SMTP, IMAP, and MAPI can't enforce MFA, making them preferred entry points for adversaries attacking your organization... >
->...The numbers on legacy authentication from an analysis of Azure Active Directory (Azure AD) traffic are stark:
+> ...The numbers on legacy authentication from an analysis of Azure Active Directory (Azure AD) traffic are stark:
> > - More than 99 percent of password spray attacks use legacy authentication protocols > - More than 97 percent of credential stuffing attacks use legacy authentication
This article assumes that you're familiar with the [basic concepts](overview.md)
## Scenario description
-Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, a widely used industry-standard method for collecting user name and password information. Typically, legacy authentication clients can't enforce any type of second factor authentication. Examples of applications that commonly or only use legacy authentication are:
+Azure AD supports the most widely used authentication and authorization protocols, including legacy authentication. Legacy authentication can't directly prompt users for second-factor authentication or other authentication requirements needed to satisfy Conditional Access policies. This authentication pattern includes basic authentication, a widely used industry-standard method for collecting user name and password information. Examples of applications that commonly or only use legacy authentication are:
- Microsoft Office 2013 or older. - Apps using mail protocols like POP, IMAP, and SMTP AUTH.
Single factor authentication (for example, username and password) isn't enough t
How can you prevent apps using legacy authentication from accessing your tenant's resources? The recommendation is to just block them with a Conditional Access policy. If necessary, you allow only certain users and specific network locations to use apps that are based on legacy authentication.
-Conditional Access policies are enforced after the first-factor authentication has been completed. Therefore, Conditional Access isn't intended as a first line defense for scenarios like denial-of-service (DoS) attacks, but can utilize signals from these events (for example, the sign-in risk level, location of the request, and so on) to determine access.
- ## Implementation
-This section explains how to configure a Conditional Access policy to block legacy authentication.
+This section explains how to configure a Conditional Access policy to block legacy authentication.
### Messaging protocols that support legacy authentication
For more information about these authentication protocols and services, see [Sig
### Identify legacy authentication use
-Before you can block legacy authentication in your directory, you need to first understand if your users have apps that use legacy authentication and how it affects your overall directory. Azure AD sign-in logs can be used to understand if you're using legacy authentication.
+Before you can block legacy authentication in your directory, you need to first understand if your users have clients that use legacy authentication. Below, you'll find useful information to identify and triage where clients are using legacy authentication.
+
+#### Indicators from Azure AD
1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**. 1. Add the Client App column if it isn't shown by clicking on **Columns** > **Client App**.
Before you can block legacy authentication in your directory, you need to first
Filtering will only show you sign-in attempts that were made by legacy authentication protocols. Clicking on each individual sign-in attempt will show you more details. The **Client App** field under the **Basic Info** tab will indicate which legacy authentication protocol was used.
-These logs will indicate which users are still depending on legacy authentication and which applications are using legacy protocols to make authentication requests. For users that don't appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
+These logs will indicate where users are using clients that are still depending on legacy authentication. For users that don't appear in these logs and are confirmed to not be using legacy authentication, implement a Conditional Access policy for these users only.
+
+Additionally, to help triage legacy authentication within your tenant, use the [Sign-ins using legacy authentication workbook](../reports-monitoring/workbook-legacy%20authentication.md).
+
+#### Indicators from client
+
+To determine if a client is using legacy or modern authentication based on the dialog box presented at sign-in, see the article [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online#authentication-dialog).
+
+## Important considerations
+
+Many clients that previously only supported legacy authentication now support modern authentication. Clients that support both legacy and modern authentication may require a configuration update to move from legacy to modern authentication. If you see **modern mobile**, **desktop client**, or **browser** for a client in the Azure AD logs, it's using modern authentication. If it has a specific client or protocol name, such as **Exchange ActiveSync**, it's using legacy authentication. The client types in Conditional Access, Azure AD Sign-in logs, and the legacy authentication workbook distinguish between modern and legacy authentication clients for you.
+
+- Clients that support modern authentication but aren't configured to use modern authentication should be updated or reconfigured to use modern authentication.
+- All clients that don't support modern authentication should be replaced.
+
+> [!IMPORTANT]
+>
+> **Exchange ActiveSync with certificate-based authentication (CBA)**
+>
+> When implementing Exchange ActiveSync (EAS) with CBA, configure clients to use modern authentication. Clients not using modern authentication for EAS with CBA **are not blocked** by the [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online). However, these clients **are blocked** by Conditional Access policies configured to block legacy authentication.
+>
+>For more information on implementing support for CBA with Azure AD and modern authentication, see [How to configure Azure AD certificate-based authentication (Preview)](../authentication/how-to-certificate-based-authentication.md). As another option, CBA performed at a federation server can be used with modern authentication.
++
+If you're using Microsoft Intune, you might be able to change the authentication type using the email profile you push or deploy to your devices. If you're using iOS devices (iPhones and iPads), you should take a look at [Add e-mail settings for iOS and iPadOS devices in Microsoft Intune](/mem/intune/configuration/email-settings-ios).
-## Block legacy authentication
+## Block legacy authentication
There are two ways to use Conditional Access policies to block legacy authentication.
You can select all available grant controls for the **Other clients** condition;
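If you'd like to create the directly-blocking policy programmatically rather than in the portal, a hedged Microsoft Graph PowerShell sketch follows. The name, scoping, and report-only state are illustrative assumptions; review impact in report-only mode before enforcing.

```powershell
# Sketch: block legacy authentication clients with a Conditional Access policy.
# Assumes the Microsoft Graph PowerShell SDK is installed.
Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

New-MgIdentityConditionalAccessPolicy -BodyParameter @{
    displayName   = "Block legacy authentication (example)"
    state         = "enabledForReportingButNotEnforced"
    conditions    = @{
        applications   = @{ includeApplications = @("All") }
        users          = @{ includeUsers = @("All") }
        clientAppTypes = @("exchangeActiveSync", "other")   # legacy authentication clients
    }
    grantControls = @{ operator = "OR"; builtInControls = @("block") }
}
```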
- [Determine impact using Conditional Access report-only mode](howto-conditional-access-insights-reporting.md) - If you aren't familiar with configuring Conditional Access policies yet, see [require MFA for specific apps with Azure Active Directory Conditional Access](../authentication/tutorial-enable-azure-mfa.md) for an example. - For more information about modern authentication support, see [How modern authentication works for Office client apps](/office365/enterprise/modern-auth-for-office-2013-and-2016) -- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
+- [How to set up a multifunction device or application to send email using Microsoft 365](/exchange/mail-flow-best-practices/how-to-set-up-a-multifunction-device-or-application-to-send-email-using-microsoft-365-or-office-365)
active-directory Howto Continuous Access Evaluation Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md
description: Troubleshoot and respond to changes in user state faster with conti
- Previously updated : 01/25/2022+ Last updated : 06/09/2022
The potential IP address mismatch between Azure AD & resource provider table all
This workbook table sheds light on these scenarios by displaying the respective IP addresses and whether a CAE token was issued during the session.
+#### Continuous access evaluation insights per sign-in
+
+The continuous access evaluation insights per sign-in page in the workbook connects multiple requests from the sign-in logs and displays a single request where a CAE token was issued.
+
+This workbook can come in handy when, for example, a user opens Outlook on their desktop and attempts to access resources inside of Exchange Online. This sign-in action may map to multiple interactive and non-interactive sign-in requests in the logs, making issues hard to diagnose.
+ #### IP address configuration Your identity provider and resource providers may see different IP addresses. This mismatch may happen because of the following examples:
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Now that you’ve created the VM, you need to configure Azure RBAC policy to det
- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges. - **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
-To log in to a VM over SSH, you must have the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor roles assigned for a VM don’t automatically have privileges to Azure AD login to the VM over SSH. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+To log in to a VM over SSH, you must have the Virtual Machine Administrator Login or Virtual Machine User Login role on the Resource Group containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resources. An Azure user with the Owner or Contributor roles assigned for a VM doesn’t automatically have privileges to log in to the VM over SSH by using Azure AD. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
There are multiple ways you can configure role assignments for VM, as an example you can use:
There are multiple ways you can configure role assignments for VM, as an example
To configure role assignments for your Azure AD enabled Linux VMs:
+1. Select the **Resource Group** containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resource.
+ 1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment** to open the Add role assignment page.
The following example uses [az role assignment create](/cli/azure/role/assignmen
```azurecli-interactive username=$(az account show --query user.name --output tsv)
-vm=$(az vm show --resource-group AzureADLinuxVM --name myVM --query id -o tsv)
+rg=$(az group show --resource-group myResourceGroup --query id -o tsv)
az role assignment create \ --role "Virtual Machine Administrator Login" \ --assignee $username \
- --scope $vm
+ --scope $rg
``` > [!NOTE]
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Now that you've created the VM, you need to configure Azure RBAC policy to deter
- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges. > [!NOTE]
-> To allow a user to log in to the VM over RDP, you must assign either the Virtual Machine Administrator Login or Virtual Machine User Login role. An Azure user with the Owner or Contributor roles assigned for a VM do not automatically have privileges to log in to the VM over RDP. This is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+> To allow a user to log in to the VM over RDP, you must assign either the Virtual Machine Administrator Login or Virtual Machine User Login role on the Resource Group containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resources. An Azure user with the Owner or Contributor roles assigned for a VM does not automatically have privileges to log in to the VM over RDP. This is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
There are multiple ways you can configure role assignments for VM:
There are multiple ways you can configure role assignments for VM:
To configure role assignments for your Azure AD enabled Windows Server 2019 Datacenter VMs:
+1. Select the **Resource Group** containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resource.
+ 1. Select **Access control (IAM)**. 1. Select **Add** > **Add role assignment** to open the Add role assignment page.
The following example uses [az role assignment create](/cli/azure/role/assignmen
```AzureCLI $username=$(az account show --query user.name --output tsv)
-$vm=$(az vm show --resource-group myResourceGroup --name myVM --query id -o tsv)
+$rg=$(az group show --resource-group myResourceGroup --query id -o tsv)
az role assignment create \ --role "Virtual Machine Administrator Login" \ --assignee $username \
- --scope $vm
+ --scope $rg
``` > [!NOTE]
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Most IT administrators are familiar with Active Directory Domain Services concep
| Admin management|Organizations will use a combination of domains, organizational units, and groups in AD to delegate administrative rights to manage the directory and resources it controls.| Azure AD provides [built-in roles](./active-directory-users-assign-role-azure-portal.md) with its Azure AD role-based access control (Azure AD RBAC) system, with limited support for [creating custom roles](../roles/custom-overview.md) to delegate privileged access to the identity system, the apps, and resources it controls.</br>Managing roles can be enhanced with [Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md) to provide just-in-time, time-restricted, or workflow-based access to privileged roles. | | Credential management| Credentials in Active Directory are based on passwords, certificate authentication, and smartcard authentication. Passwords are managed using password policies that are based on password length, expiry, and complexity.|Azure AD uses intelligent [password protection](../authentication/concept-password-ban-bad.md) for cloud and on-premises. Protection includes smart lockout plus blocking common and custom password phrases and substitutions. </br>Azure AD significantly boosts security [through Multi-factor authentication](../authentication/concept-mfa-howitworks.md) and [passwordless](../authentication/concept-authentication-passwordless.md) technologies, like FIDO2. </br>Azure AD reduces support costs by providing users a [self-service password reset](../authentication/concept-sspr-howitworks.md) system. | | **Apps**|||
-| Infrastructure apps|Active Directory forms the basis for many infrastructure on-premises components, for example, DNS, DHCP, IPSec, WiFi, NPS, and VPN access|In a new cloud world, Azure AD, is the new control plane for accessing apps versus relying on networking controls. When users authenticate[, Conditional access (CA)](../conditional-access/overview.md), will control which users, will have access to which apps under required conditions.|
+| Infrastructure apps|Active Directory forms the basis for many infrastructure on-premises components, for example, DNS, DHCP, IPSec, WiFi, NPS, and VPN access|In a new cloud world, Azure AD, is the new control plane for accessing apps versus relying on networking controls. When users authenticate, [Conditional access (CA)](../conditional-access/overview.md) controls which users have access to which apps under required conditions.|
| Traditional and legacy apps| Most on-premises apps use LDAP, Windows-Integrated Authentication (NTLM and Kerberos), or Header-based authentication to control access to users.| Azure AD can provide access to these types of on-premises apps using [Azure AD application proxy](../app-proxy/application-proxy.md) agents running on-premises. Using this method Azure AD can authenticate Active Directory users on-premises using Kerberos while you migrate or need to coexist with legacy apps. | | SaaS apps|Active Directory doesn't support SaaS apps natively and requires federation system, such as AD FS.|SaaS apps supporting OAuth2, SAML, and WS-\* authentication can be integrated to use Azure AD for authentication. | | Line of business (LOB) apps with modern authentication|Organizations can use AD FS with Active Directory to support LOB apps requiring modern authentication.| LOB apps requiring modern authentication can be configured to use Azure AD for authentication. |
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
We have published guidance for managing the business process of integrating apps
A good place to start is by evaluating your use of Active Directory Federation Services (ADFS). Many organizations use ADFS for authentication with SaaS apps, custom Line-of-Business apps, and Microsoft 365 and Azure AD-based apps:
-![Diagram shows on-premises apps, line of business apps, SaaS apps, and, via Azure AD, Office 365 all connecting with dotted lines into Active Directory and AD FS.](\media\five-steps-to-full-application-integration-with-azure-ad\adfs-integration-1.png)
+![Diagram shows on-premises apps, line of business apps, SaaS apps, and, via Azure AD, Office 365 all connecting with dotted lines into Active Directory and AD FS.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-1.png)
You can upgrade this configuration by [replacing ADFS with Azure AD as the center](../manage-apps/migrate-adfs-apps-to-azure.md) of your identity management solution. Doing so enables sign-on for every app your employees want to access, and makes it easy for employees to find any business application they need via the [MyApps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510), in addition to the other benefits mentioned above.
-![Diagram shows on-premises apps via Active Directory and AD FS, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory.](\media\five-steps-to-full-application-integration-with-azure-ad\adfs-integration-2.png)
+![Diagram shows on-premises apps via Active Directory and AD FS, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-2.png)
Once Azure AD becomes the central identity provider, you may be able to switch from ADFS completely, rather than using a federated solution. Apps that previously used ADFS for authentication can now use Azure AD alone.
-![Diagram shows on-premises, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory. Active Directory and AD FS is not present.](\media\five-steps-to-full-application-integration-with-azure-ad\adfs-integration-3.png)
+![Diagram shows on-premises, line of business apps, SaaS apps, and Office 365 all connecting with dotted lines into Azure Active Directory. Active Directory and AD FS is not present.](./media/five-steps-to-full-application-integration-with-azure-ad/adfs-integration-3.png)
You can also migrate apps that use a different cloud-based identity provider to Azure AD. Your organization may have multiple Identity Access Management (IAM) solutions in place. Migrating to one Azure AD infrastructure is an opportunity to reduce dependencies on IAM licenses (on-premises or in the cloud) and infrastructure costs. In cases where you may have already paid for Azure AD via M365 licenses, there is no reason to pay the added cost of another IAM solution.
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
New-AzVm `
```
-The user assigned managed identity should be specified using its [resourceID](how-manage-user-assigned-managed-identities.md
-).
+The user assigned managed identity should be specified using its [resourceID](./how-manage-user-assigned-managed-identities.md).
# [Azure CLI](#tab/azure-cli)
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Previously updated : 06/21/2022 Last updated : 06/23/2022
The following sections describe current support for administrative unit scenario
| Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center | | | :: | :: | :: | | Create or delete administrative units | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Add or remove members individually | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Add or remove members in bulk | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| Add or remove members | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Assign administrative unit-scoped administrators | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Add or remove users or devices dynamically based on rules (Preview) | :heavy_check_mark: | :heavy_check_mark: | :x: | | Add or remove groups dynamically based on rules | :x: | :x: | :x: |
active-directory Bgsonline Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bgsonline-tutorial.md
To configure Azure AD single sign-on with BGS Online, perform the following step
For test environment, use this pattern `https://millwardbrown.marketingtracker.nl/mt5/sso/saml/AssertionConsumerService.aspx` > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [BGS Online support team](mailTo:bgsdashboardteam@millwardbrown.com) to get these values.
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [BGS Online support team](mailto:bgsdashboardteam@millwardbrown.com) to get these values.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
active-directory Tableau Online Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tableau-online-provisioning-tutorial.md
Before you configure and enable automatic user provisioning, decide which users
This section guides you through the steps to configure the Azure AD provisioning service. Use it to create, update, and disable users or groups in Tableau Online based on user or group assignments in Azure AD. > [!TIP]
-> You also can enable SAML-based single sign-on for Tableau Online. Follow the instructions in the [Tableau Online single sign-on tutorial](tableauonline-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
+> You also can enable SAML-based Single Sign-On for Tableau Online. Follow the instructions in the [Tableau Online single sign-on tutorial](tableauonline-tutorial.md). Single sign-on can be configured independently of automatic user provisioning, although these two features complement each other.
### Configure automatic user provisioning for Tableau Online in Azure AD
You can use the **Synchronization Details** section to monitor progress and foll
For information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Update a Tableau Cloud application to use the Tableau Cloud SCIM 2.0 endpoint
+In June 2022, Tableau released a SCIM 2.0 connector. Completing the steps below will update applications configured to use the Tableau API endpoint to use the SCIM 2.0 endpoint. These steps will remove any customizations previously made to the Tableau Cloud application, including:
+* Authentication details
+* Scoping filters
+* Custom attribute mappings
+
+> [!NOTE]
+> Be sure to note any changes that have been made to the settings listed above before completing the steps below. Failure to do so will result in the loss of customized settings.
+
+1. Sign in to the Azure portal at https://portal.azure.com
+2. Navigate to your current Tableau Cloud app under Azure Active Directory > Enterprise Applications
+3. In the Properties section of the app, copy the Object ID.
+
+ ![Screenshot of Tableau Cloud app in the Azure portal.](./media/tableau-online-provisioning-tutorial/tableau-cloud-properties.png)
+
+4. In a new web browser window, go to https://developer.microsoft.com/graph/graph-explorer and sign in as the administrator for the Azure AD tenant where your app is added.
+
+ ![Screenshot of Microsoft Graph explorer sign in page.](./media/workplace-by-facebook-provisioning-tutorial/permissions.png)
+
+5. Check to make sure the account being used has the correct permissions. The permission “Directory.ReadWrite.All” is required to make this change.
+
+ ![Screenshot of Microsoft Graph settings option.](./media/workplace-by-facebook-provisioning-tutorial/permissions-2.png)
+
+ ![Screenshot of Microsoft Graph permissions.](./media/workplace-by-facebook-provisioning-tutorial/permissions-3.png)
+
+6. Using the Object ID copied from the app previously, run the following command:
+
+```
+GET https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/
+```
+
+7. Taking the "id" value from the response body of the GET request from above, run the command below, replacing "[job-id]" with the id value from the GET request. The value should have the format of "Tableau.xxxxxxxxxxxxxxx.xxxxxxxxxxxxxxx":
+```
+DELETE https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs/[job-id]
+```
+8. In the Graph Explorer, run the command below. Replace "[object-id]" with the service principal ID (object ID) copied from the third step.
+```
+POST https://graph.microsoft.com/beta/servicePrincipals/[object-id]/synchronization/jobs { "templateId": "TableauOnlineSCIM" }
+```
+
+![Screenshot of Microsoft Graph request.](./media/tableau-online-provisioning-tutorial/tableau-cloud-graph.png)
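As an optional alternative to Graph Explorer, the same three calls can be scripted with the Microsoft Graph PowerShell SDK. This is a sketch only: `$objectId` is the service principal Object ID from step 3, and it assumes the first job returned is the old Tableau job.

```powershell
# Assumes the Microsoft Graph PowerShell SDK is installed
Connect-MgGraph -Scopes "Directory.ReadWrite.All"
$objectId = "<service-principal-object-id>"   # from step 3

# Step 6: list the existing synchronization jobs
$jobs = Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$objectId/synchronization/jobs"

# Step 7: delete the old Tableau job (assumes it's the first job returned)
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$objectId/synchronization/jobs/$($jobs.value[0].id)"

# Step 8: create a new job from the SCIM 2.0 template
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/beta/servicePrincipals/$objectId/synchronization/jobs" `
    -Body '{ "templateId": "TableauOnlineSCIM" }' -ContentType "application/json"
```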
+
+9. Return to the first web browser window and select the Provisioning tab for your application. Your configuration will have been reset. You can confirm the upgrade has taken place by confirming the Job ID starts with “TableauOnlineSCIM”.
+
+10. Under the Admin Credentials section, select "Bearer Authentication" as the authentication method and enter the Tenant URL and Secret Token of the Tableau instance you wish to provision to.
+![Screenshot of Admin Credentials in Tableau Cloud in the Azure portal.](./media/tableau-online-provisioning-tutorial/tableau-cloud-creds.png)
+
+11. Restore any previous changes you made to the application (Authentication details, Scoping filters, Custom attribute mappings) and re-enable provisioning.
+
+> [!NOTE]
+> Failure to restore the previous settings may result in attributes (name.formatted for example) updating in Tableau Cloud unexpectedly. Be sure to check the configuration before enabling provisioning.
+ ## Change log * 09/30/2020 - Added support for attribute "authSetting" for Users.
For information on how to read the Azure AD provisioning logs, see [Reporting on
<!--Image references--> [1]: ./media/tableau-online-provisioning-tutorial/tutorial_general_01.png [2]: ./media/tableau-online-provisioning-tutorial/tutorial_general_02.png
-[3]: ./media/tableau-online-provisioning-tutorial/tutorial_general_03.png
+[3]: ./media/tableau-online-provisioning-tutorial/tutorial_general_03.png
active-directory Voyance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/voyance-tutorial.md
In this section, you enable Britta Simon to use Azure single sign-on by granting
In this section, a user called Britta Simon is created in Voyance. Voyance supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Voyance, a new one is created after authentication. >[!NOTE]
->If you need to create a user manually, you need to contact [Voyance support team](maiLto:support@nyansa.com).
+>If you need to create a user manually, you need to contact [Voyance support team](mailto:support@nyansa.com).
### Test single sign-on
active-directory Wizergosproductivitysoftware Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/wizergosproductivitysoftware-tutorial.md
To configure Azure AD single sign-on with Wizergos Productivity Software, perfor
a. Click **UPLOAD** button to upload the downloaded certificate from Azure AD.
- b. In the **Issuer URL** textbox, paste the **Azure AD Identifier** value which you have copied from Azure portal.
+ b. In the **Issuer URL** textbox, paste the **Azure AD Identifier** value that you copied from the Azure portal.
- c. In the **Single Sign-On URL** textbox, paste the **Login URL** value which you have copied from Azure portal.
+ c. In the **Single Sign-On URL** textbox, paste the **Login URL** value that you copied from the Azure portal.
- d. In the **Single Sign-Out URL** textbox, paste the **Logout URL** value which you have copied from Azure portal.
+ d. In the **Single Sign-Out URL** textbox, paste the **Logout URL** value that you copied from the Azure portal.
e. Click **Save** button.
In this section, you enable Britta Simon to use Azure single sign-on by granting
### Create Wizergos Productivity Software test user
-In this section, you create a user called Britta Simon in Wizergos Productivity Software. Work with [Wizergos Productivity Software support team](mailTo:support@wizergos.com) to add the users in the Wizergos Productivity Software platform.
+In this section, you create a user called Britta Simon in Wizergos Productivity Software. Work with [Wizergos Productivity Software support team](mailto:support@wizergos.com) to add the users in the Wizergos Productivity Software platform.
### Test single sign-on
active-directory Introduction To Verifiable Credentials Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/introduction-to-verifiable-credentials-architecture.md
> [!IMPORTANT] > Azure Active Directory Verifiable Credentials is currently in public preview. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-It's important to plan your verifiable credential solution so that in addition to issuing and or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md) and the[ FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial.
+It's important to plan your verifiable credential solution so that, in addition to issuing and/or validating credentials, you have a complete view of the architectural and business impacts of your solution. If you haven't reviewed them already, we recommend you review [Introduction to Azure Active Directory Verifiable Credentials](decentralized-identifier-overview.md) and the [FAQs](verifiable-credentials-faq.md), and then complete the [Getting Started](get-started-verifiable-credentials.md) tutorial.
This architectural overview introduces the capabilities and components of the Azure Active Directory Verifiable Credentials service. For more detailed information on issuance and validation, see
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[aks-nfs]: azure-nfs-volume.md [anf]: ../azure-netapp-files/azure-netapp-files-introduction.md [anf-delegate-subnet]: ../azure-netapp-files/azure-netapp-files-delegate-subnet.md
-[anf-quickstart]: ../azure-netapp-files/
[anf-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all [anf-waitlist]: https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8cq17Xv9yVBtRCSlcD_gdVUNUpUWEpLNERIM1NOVzA5MzczQ0dQR1ZTSS4u [az-aks-show]: /cli/azure/aks#az_aks_show
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations [vm-sizes]: ../virtual-machines/sizes.md [use-system-pool]: use-system-pools.md
-[ip-limitations]: ../virtual-network/virtual-network-ip-addresses-overview-arm#standard
[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks [vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine [az-list-ips]: /cli/azure/vmss#az_vmss_list_instance_public_ips
api-management Api Management Troubleshoot Cannot Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-troubleshoot-cannot-add-custom-domain.md
The API Management service does not have permission to access the key vault that
To resolve this issue, follow these steps:
-1. Go to the [Azure portal](Https://portal.azure.com), select your API Management instance, and then select **Managed identities**. Make sure that the **Register with Azure Active Directory** option is set to **Yes**.
+1. Go to the [Azure portal](https://portal.azure.com), select your API Management instance, and then select **Managed identities**. Make sure that the **Register with Azure Active Directory** option is set to **Yes**.
![Registering with Azure Active Directory](./media/api-management-troubleshoot-cannot-add-custom-domain/register-with-aad.png) 1. In the Azure portal, open the **Key vaults** service, and select the key vault that you're trying to use for the custom domain. 1. Select **Access policies**, and check whether there is a service principal that matches the name of the API Management service instance. If there is, select the service principal, and make sure that it has the **Get** permission listed under **Secret permissions**.
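If you prefer to script the fix, the missing secret permission can also be granted with Azure PowerShell. The following is a minimal sketch that assumes the API Management instance has a system-assigned managed identity enabled; the resource names are placeholders.

```azurepowershell-interactive
# Look up the API Management instance and its system-assigned managed identity
# (placeholder names; substitute your resource group, instance, and key vault)
$apim = Get-AzApiManagement -ResourceGroupName "<resource-group>" -Name "<apim-instance>"
$principalId = $apim.Identity.PrincipalId

# Grant the identity the Get permission on secrets in the key vault
Set-AzKeyVaultAccessPolicy -VaultName "<key-vault-name>" -ObjectId $principalId -PermissionsToSecrets Get
```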
api-management Compute Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/compute-infrastructure.md
To find the `platformVersion` property in the portal:
1. In **API version**, select a current version such as `2021-08-01` or later. 1. In the JSON view, scroll down to find the `platformVersion` property.
- :::image type="content" source="media/compute-infrastructure/platformversion property.png" alt-text="platformVersion property in JSON view":::
+ :::image type="content" source="media/compute-infrastructure/platformversion-property.png" alt-text="platformVersion property in JSON view":::
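If you'd rather not open the JSON view in the portal, the same property can be read over the REST API. This is a minimal sketch using `Invoke-AzRestMethod` with placeholder IDs and the `2021-08-01` API version mentioned above:

```azurepowershell-interactive
# Query the API Management resource directly (placeholder subscription, resource group, and service name)
$response = Invoke-AzRestMethod -Method GET -Path "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ApiManagement/service/<apim-instance>?api-version=2021-08-01"

# The compute platform version is returned under properties.platformVersion
($response.Content | ConvertFrom-Json).properties.platformVersion
```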
## How do I migrate to the `stv2` platform?
api-management Websocket Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/websocket-api.md
Below are the current restrictions of WebSocket support in API Management:
* Azure CLI, PowerShell, and SDK currently do not support management operations of WebSocket APIs. * 200 active connections limit per unit. * Websockets APIs support the following valid buffer types for messages: Close, BinaryFragment, BinaryMessage, UTF8Fragment, and UTF8Message.
+* Currently, the set header policy doesn't support changing certain well-known headers, including `Host` headers, in onHandshake requests.
### Unsupported policies
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
Web debugging tools like Fiddler, can help you debug web applications by capturi
Use the web debugger of your choice. In this sample, we use Fiddler to capture and analyze HTTP or HTTPS traffic. Follow these instructions:
-1. Download the Fiddler tool at <https://www.telerik.com/download/fiddler>.
+1. Download [Fiddler](https://www.telerik.com/download/fiddler).
> [!NOTE] > Choose Fiddler4 if the capturing computer has .NET 4 installed. Otherwise, choose Fiddler2. 2. Right-click the setup executable, and run it as administrator to install.
- ![Screenshot shows the Fiddler tool setup program with a contextual menu with Run as administrator selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-12.png)
+ ![Screenshot shows the Fiddler setup program with a contextual menu with Run as administrator selected.](./media/how-to-troubleshoot-application-gateway-session-affinity-issues/troubleshoot-session-affinity-issues-12.png)
3. When you open Fiddler, it should automatically start capturing traffic (notice **Capturing** in the lower-left corner). Press F12 to start or stop traffic capture.
application-gateway Tutorial Url Route Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-powershell.md
At this point, you have an application gateway that listens for traffic on port
### Add image and video backend pools and port
-Add backend pools named *imagesBackendPool* and *videoBackendPool* to your application gateway[Add-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/add-azapplicationgatewaybackendaddresspool). Add the frontend port for the pools using [Add-AzApplicationGatewayFrontendPort](/powershell/module/az.network/add-azapplicationgatewayfrontendport). Submit the changes to the application gateway using [Set-AzApplicationGateway](/powershell/module/az.network/set-azapplicationgateway).
+Add backend pools named *imagesBackendPool* and *videoBackendPool* to your application gateway using [Add-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/add-azapplicationgatewaybackendaddresspool). Add the frontend port for the pools using [Add-AzApplicationGatewayFrontendPort](/powershell/module/az.network/add-azapplicationgatewayfrontendport). Submit the changes to the application gateway using [Set-AzApplicationGateway](/powershell/module/az.network/set-azapplicationgateway).
```azurepowershell-interactive $appgw = Get-AzApplicationGateway `
applied-ai-services Applied Ai Services Customer Spotlight Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/applied-ai-services-customer-spotlight-use-cases.md
Title: Customer spotlight on use cases
-description: Customer spotlight on use cases
+description: Learn about customers who have used Azure Applied AI Services. See use cases that improve customer experience, streamline processes, and protect data.
- Previously updated : 05/13/2021+ Last updated : 06/02/2022 + # Customer spotlight on use cases
-Customers are already using Applied AI Services to add AI horsepower to their business scenarios.
+Customers are using Azure Applied AI Services to add AI horsepower to their business scenarios:
+
+- Chatbots improve customer experience in a robust, expandable way.
+- AI-driven search offers strong data security and delivers smart results that add value.
+- Azure Form Recognizer increases ROI by using automation to streamline data extraction.
| Partner | Description | Customer story | ||-|-|
-| <center>![Progressive_Logo](./media/logo-progressive-02.png) | **Progressive helps customers make smarter insurance decisions with Bot Service and Cognitive Search.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Read the story](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
-| <center>![Wix Logo](./media/wix-logo-01.png) | **WIX deploys smart search across 150 million websites with Cognitive Search** <br> "We really benefitted from choosing Azure Cognitive Search because we could go to market faster than we had with other products. We don't have to manage infrastructure, and our developers can spend time on higher-value tasks."*-Giedrius Graževičius: Project Manager for Search, Wix* | [Read the story](https://customers.microsoft.com/story/764974-wix-partner-professional-services-azure-cognitive-search) |
-| <center>![Chevron logo](./media/chevron-01.png) | **Chevron uses Form Recognizer to extract volumes of data from unstructured reports**<br>"We only have a finite amount of time to extract data, and oftentimes the data that's left behind is valuable. With this new technology, we're able to extract everything and then decide what we can use to improve our performance."*-Diane Cillis, Engineering Technologist, Chevron Canada* | [Read the story](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services) |
-
+| <center>![Logo of Progressive Insurance, which consists of the word progressive in a slanted font in blue, capital letters.](./media/logo-progressive-02.png) | **Progressive uses Azure Bot Service and Azure Cognitive Search to help customers make smarter insurance decisions.** <br>"One of the great things about Bot Service is that, out of the box, we could use it to quickly put together the basic framework for our bot." *-Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance* | [Insurance shoppers gain new service channel with artificial intelligence chatbot](https://customers.microsoft.com/story/789698-progressive-insurance-cognitive-services-insurance) |
+| <center>![Logo of Wix, which consists of the name Wix in a dark-gray font in lowercase letters.](./media/wix-logo-01.png) | **WIX uses Cognitive Search to deploy smart search across 150 million websites.** <br> "We really benefitted from choosing Azure Cognitive Search because we could go to market faster than we had with other products. We don't have to manage infrastructure, and our developers can spend time on higher-value tasks."*-Giedrius Graževičius: Project Manager for Search, Wix* | [Wix deploys smart, scalable search across 150 million websites with Azure Cognitive Search](https://customers.microsoft.com/story/764974-wix-partner-professional-services-azure-cognitive-search) |
+| <center>![Logo of Chevron. The name Chevron appears above two vertically stacked chevrons that point downward. The top one is blue, and the lower one is red.](./media/chevron-01.png) | **Chevron uses Form Recognizer to extract volumes of data from unstructured reports.**<br>"We only have a finite amount of time to extract data, and oftentimes the data that's left behind is valuable. With this new technology, we're able to extract everything and then decide what we can use to improve our performance."*-Diane Cillis, Engineering Technologist, Chevron Canada* | [Chevron is using AI-powered robotic process automation to extract volumes of data from unstructured reports for analysis](https://customers.microsoft.com/story/chevron-mining-oil-gas-azure-cognitive-services) |
## See also
-* [What are Applied AI Services?](what-are-applied-ai-services.md)
-* [Why use Applied AI Services?](why-applied-ai-services.md)
+
+- [What are Applied AI Services?](what-are-applied-ai-services.md)
+- [Why use Applied AI Services?](why-applied-ai-services.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
This section helps you decide which Form Recognizer v3.0 supported feature you s
|<ul><li>**General structured document**</li></ul>| Is your document mostly structured and does it contain a few fields and values that may not be covered by the other prebuilt models?|<ul><li>If **Yes**, use the [**General document (preview)**](concept-general-document.md) model.</li><li> If **No**, because the fields and values are complex and highly variable, train and build a [**Custom**](how-to-guides/build-custom-model-v3.md) model.</li></ul> |<ul><li>**Invoice**</li></ul>| Is your invoice document composed of text in a [supported language](language-support.md#invoice-model)?|<ul><li>If **Yes**, use the [**Invoice**](concept-invoice.md) model.<li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul> |<ul><li>**Receipt**</li><li>**Business card**</li></ul>| Is your receipt or business card document composed in English text? | <ul><li>If **Yes**, use the [**Receipt**](concept-receipt.md) or [**Business Card**](concept-business-card.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model.</li></ul>|
-|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the[**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
+|<ul><li>**ID document**</li></ul>| Is your ID document a US driver's license or an international passport?| <ul><li>If **Yes**, use the [**ID document**](concept-id-document.md) model.</li><li>If **No**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md) model</li></ul>|
|<ul><li>**Form** or **Document**</li></ul>| Is your form or document an industry-standard format commonly used in your business or industry?| <ul><li>If **Yes**, use the [**Layout**](concept-layout.md) or [**General document (preview)**](concept-general-document.md).</li><li>If **No**, you can [**Train and build a custom model**](quickstarts/try-sample-label-tool.md#train-a-custom-form-model). ## Form Recognizer features and development options
applied-ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-sample-label-tool.md
Train a custom model to analyze and extract data from forms and documents specif
1. Start by creating a new CORS entry in the Blob service.
- 1. Set the **Allowed origins** to **<https://fott-2-1.azurewebsites.net>**.
+ 1. Set the **Allowed origins** to `https://fott-2-1.azurewebsites.net`.
:::image type="content" source="../media/quickstarts/storage-cors-example.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
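The same CORS entry can be scripted instead of configured in the portal. The following is a minimal Azure PowerShell sketch with placeholder account details; the methods, headers, and max age shown are illustrative values, not requirements.

```azurepowershell-interactive
# Build a storage context for the account that holds your training data (placeholder name and key)
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

# Allow the Sample Labeling tool origin to call the Blob service.
# Note: Set-AzStorageCORSRule replaces any existing Blob CORS rules on the account.
$corsRule = @{
    AllowedOrigins  = @("https://fott-2-1.azurewebsites.net")
    AllowedMethods  = @("Get","Head","Put","Delete","Options")
    AllowedHeaders  = @("*")
    ExposedHeaders  = @("*")
    MaxAgeInSeconds = 200
}
Set-AzStorageCORSRule -ServiceType Blob -Context $ctx -CorsRules $corsRule
```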
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 06/13/2022 Last updated : 06/22/2022 recommendations: false
DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), cr
//sample form document
-Uri fileUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");
+Uri fileUri = new Uri("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf");
AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-document", fileUri);
foreach (DocumentKeyValuePair kvp in result.KeyValuePairs)
} }
-Console.WriteLine("Detected entities:");
-
-foreach (DocumentEntity entity in result.Entities)
-{
- if (entity.SubCategory == null)
- {
- Console.WriteLine($" Found entity '{entity.Content}' with category '{entity.Category}'.");
- }
- else
- {
- Console.WriteLine($" Found entity '{entity.Content}' with category '{entity.Category}' and sub-category '{entity.SubCategory}'.");
- }
-}
- foreach (DocumentPage page in result.Pages) { Console.WriteLine($"Document Page {page.PageNumber} has {page.Lines.Count} line(s), {page.Words.Count} word(s),");
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Line {i} has content: '{line.Content}'."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {line.BoundingBox[0].X}, Y= {line.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {line.BoundingBox[1].X}, Y= {line.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {line.BoundingBox[2].X}, Y= {line.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {line.BoundingBox[3].X}, Y= {line.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {line.BoundingPolygon[0].X}, Y= {line.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {line.BoundingPolygon[1].X}, Y= {line.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {line.BoundingPolygon[2].X}, Y= {line.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {line.BoundingPolygon[3].X}, Y= {line.BoundingPolygon[3].Y}");
} for (int i = 0; i < page.SelectionMarks.Count; i++)
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Selection Mark {i} is {selectionMark.State}."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {selectionMark.BoundingBox[0].X}, Y= {selectionMark.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {selectionMark.BoundingBox[1].X}, Y= {selectionMark.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {selectionMark.BoundingBox[2].X}, Y= {selectionMark.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {selectionMark.BoundingBox[3].X}, Y= {selectionMark.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {selectionMark.BoundingPolygon[0].X}, Y= {selectionMark.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {selectionMark.BoundingPolygon[1].X}, Y= {selectionMark.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {selectionMark.BoundingPolygon[2].X}, Y= {selectionMark.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {selectionMark.BoundingPolygon[3].X}, Y= {selectionMark.BoundingPolygon[3].Y}");
} }
AzureKeyCredential credential = new AzureKeyCredential(key);
DocumentAnalysisClient client = new DocumentAnalysisClient(new Uri(endpoint), credential); //sample document
+// sample form document
Uri fileUri = new Uri ("https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"); AnalyzeDocumentOperation operation = await client.StartAnalyzeDocumentFromUriAsync("prebuilt-layout", fileUri);
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Line {i} has content: '{line.Content}'."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {line.BoundingBox[0].X}, Y= {line.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {line.BoundingBox[1].X}, Y= {line.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {line.BoundingBox[2].X}, Y= {line.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {line.BoundingBox[3].X}, Y= {line.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {line.BoundingPolygon[0].X}, Y= {line.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {line.BoundingPolygon[1].X}, Y= {line.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {line.BoundingPolygon[2].X}, Y= {line.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {line.BoundingPolygon[3].X}, Y= {line.BoundingPolygon[3].Y}");
} for (int i = 0; i < page.SelectionMarks.Count; i++)
foreach (DocumentPage page in result.Pages)
Console.WriteLine($" Selection Mark {i} is {selectionMark.State}."); Console.WriteLine($" Its bounding box is:");
- Console.WriteLine($" Upper left => X: {selectionMark.BoundingBox[0].X}, Y= {selectionMark.BoundingBox[0].Y}");
- Console.WriteLine($" Upper right => X: {selectionMark.BoundingBox[1].X}, Y= {selectionMark.BoundingBox[1].Y}");
- Console.WriteLine($" Lower right => X: {selectionMark.BoundingBox[2].X}, Y= {selectionMark.BoundingBox[2].Y}");
- Console.WriteLine($" Lower left => X: {selectionMark.BoundingBox[3].X}, Y= {selectionMark.BoundingBox[3].Y}");
+ Console.WriteLine($" Upper left => X: {selectionMark.BoundingPolygon[0].X}, Y= {selectionMark.BoundingPolygon[0].Y}");
+ Console.WriteLine($" Upper right => X: {selectionMark.BoundingPolygon[1].X}, Y= {selectionMark.BoundingPolygon[1].Y}");
+ Console.WriteLine($" Lower right => X: {selectionMark.BoundingPolygon[2].X}, Y= {selectionMark.BoundingPolygon[2].Y}");
+ Console.WriteLine($" Lower left => X: {selectionMark.BoundingPolygon[3].X}, Y= {selectionMark.BoundingPolygon[3].Y}");
} }
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
Prebuilt models help you add Form Recognizer features to your apps without havin
* [**ID document**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument): extract text and key information from driver licenses and international passports. * [**Business card**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard): extract text and key information from business cards.
-After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com).
+After you've completed the prerequisites, navigate to [Form Recognizer Studio General Documents](https://formrecognizer.appliedai.azure.com/studio/document).
In the following example, we use the General Documents feature. The steps to use other pre-trained features like [W2 tax form](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2), [Read](https://formrecognizer.appliedai.azure.com/studio/read), [Layout](https://formrecognizer.appliedai.azure.com/studio/layout), [Invoice](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice), [Receipt](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt), [Business card](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard), and [ID documents](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument) models are similar.
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
1. Start by creating a new CORS entry in the Blob service.
-1. Set the **Allowed origins** to **<https://formrecognizer.appliedai.azure.com>**.
+1. Set the **Allowed origins** to `https://formrecognizer.appliedai.azure.com`.
:::image type="content" source="../media/quickstarts/cors-updated-image.png" alt-text="Screenshot that shows CORS configuration for a storage account.":::
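The Studio origin can be added the same way as shown earlier for the Sample Labeling tool. To confirm the entry took effect, a quick check with Azure PowerShell (placeholder account details) looks like this:

```azurepowershell-interactive
# List the Blob service CORS rules and confirm the Studio origin is allowed (placeholder account details)
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"
(Get-AzStorageCORSRule -ServiceType Blob -Context $ctx).AllowedOrigins
```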
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 06/13/2022 Last updated : 06/22/2022 recommendations: false
In this quickstart you'll use following features to analyze and extract data and
```console mkdir form-recognizer-app && cd form-recognizer-app ```
-
+ ```powershell mkdir form-recognizer-app; cd form-recognizer-app ```
Extract text, tables, structure, key-value pairs, and named entities from docume
public static void main(String[] args) {
- // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
-
- // sample document
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
- String modelId = "prebuilt-document";
- SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
- client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
-
- AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
-
- // pages
- analyzeResult.getPages().forEach(documentPage -> {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
-
- // lines
- documentPage.getLines().forEach(documentLine ->
- System.out.printf("Line %s is within a bounding box %s.%n",
- documentLine.getContent(),
- documentLine.getBoundingBox().toString()));
-
- // words
- documentPage.getWords().forEach(documentWord ->
- System.out.printf("Word %s has a confidence score of %.2f%n.",
- documentWord.getContent(),
- documentWord.getConfidence()));
- });
-
- // tables
- List <DocumentTable> tables = analyzeResult.getTables();
- for (int i = 0; i < tables.size(); i++) {
- DocumentTable documentTable = tables.get(i);
- System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
- documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell -> {
- System.out.printf("Cell '%s', has row index %d and column index %d.%n",
- documentTableCell.getContent(),
- documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
- });
- System.out.println();
- }
-
- // Entities
- analyzeResult.getEntities().forEach(documentEntity -> {
- System.out.printf("Entity category : %s, sub-category %s%n: ",
- documentEntity.getCategory(), documentEntity.getSubCategory());
- System.out.printf("Entity content: %s%n: ", documentEntity.getContent());
- System.out.printf("Entity confidence: %.2f%n", documentEntity.getConfidence());
- });
-
- // Key-value pairs
- analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
- System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent());
- System.out.printf("Key content bounding region: %s%n",
- documentKeyValuePair.getKey().getBoundingRegions().toString());
-
- if (documentKeyValuePair.getValue() != null) {
- System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
- System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
- }
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+ String modelId = "prebuilt-document";
+ SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+
+ AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
+
+ // pages
+ analyzeResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
+
+ // lines
+ documentPage.getLines().forEach(documentLine ->
+ System.out.printf("Line %s is within a bounding polygon %s.%n",
+ documentLine.getContent(),
+ documentLine.getBoundingPolygon().toString()));
+
+ // words
+ documentPage.getWords().forEach(documentWord ->
+ System.out.printf("Word %s has a confidence score of %.2f%n.",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+ });
+
+ // tables
+ List <DocumentTable> tables = analyzeResult.getTables();
+ for (int i = 0; i < tables.size(); i++) {
+ DocumentTable documentTable = tables.get(i);
+ System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
+ documentTable.getColumnCount());
+ documentTable.getCells().forEach(documentTableCell -> {
+ System.out.printf("Cell '%s', has row index %d and column index %d.%n",
+ documentTableCell.getContent(),
+ documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
});
+ System.out.println();
}+
+ // Key-value pairs
+ analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
+ System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent());
+ System.out.printf("Key content bounding region: %s%n",
+ documentKeyValuePair.getKey().getBoundingRegions().toString());
+
+ if (documentKeyValuePair.getValue() != null) {
+ System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
+ System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
+ }
+ });
}
+}
``` <!-- markdownlint-disable MD036 -->
Extract text, selection marks, text styles, table structures, and bounding regio
private static final String endpoint = "<your-endpoint>"; private static final String key = "<your-key>";
- public static void main(String[] args) {
-
- // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
-
- // sample document
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
-
- String modelId = "prebuilt-layout";
-
- SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
- client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
-
- AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
-
- // pages
- analyzeLayoutResult.getPages().forEach(documentPage -> {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
-
- // lines
- documentPage.getLines().forEach(documentLine ->
- System.out.printf("Line %s is within a bounding box %s.%n",
- documentLine.getContent(),
- documentLine.getBoundingBox().toString()));
-
- // words
- documentPage.getWords().forEach(documentWord ->
- System.out.printf("Word '%s' has a confidence score of %.2f.%n",
- documentWord.getContent(),
- documentWord.getConfidence()));
-
- // selection marks
- documentPage.getSelectionMarks().forEach(documentSelectionMark ->
- System.out.printf("Selection mark is %s and is within a bounding box %s with confidence %.2f.%n",
- documentSelectionMark.getState().toString(),
- documentSelectionMark.getBoundingBox().toString(),
- documentSelectionMark.getConfidence()));
- });
+ public static void main(String[] args) {
- // tables
- List < DocumentTable > tables = analyzeLayoutResult.getTables();
- for (int i = 0; i < tables.size(); i++) {
- DocumentTable documentTable = tables.get(i);
- System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
- documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell -> {
- System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(),
- documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
- });
- System.out.println();
- }
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+ String modelId = "prebuilt-layout";
+
+ SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+
+ AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
+
+ // pages
+ analyzeLayoutResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
+
+ // lines
+ documentPage.getLines().forEach(documentLine ->
+ System.out.printf("Line %s is within a bounding polygon %s.%n",
+ documentLine.getContent(),
+ documentLine.getBoundingPolygon().toString()));
+
+ // words
+ documentPage.getWords().forEach(documentWord ->
+ System.out.printf("Word '%s' has a confidence score of %.2f%n",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+
+ // selection marks
+ documentPage.getSelectionMarks().forEach(documentSelectionMark ->
+ System.out.printf("Selection mark is %s and is within a bounding polygon %s with confidence %.2f.%n",
+ documentSelectionMark.getState().toString(),
+ documentSelectionMark.getBoundingPolygon().toString(),
+ documentSelectionMark.getConfidence()));
+ });
+
+ // tables
+ List < DocumentTable > tables = analyzeLayoutResult.getTables();
+ for (int i = 0; i < tables.size(); i++) {
+ DocumentTable documentTable = tables.get(i);
+ System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
+ documentTable.getColumnCount());
+ documentTable.getCells().forEach(documentTableCell -> {
+ System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(),
+ documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
+ });
+ System.out.println();
} }+
+}
+ ``` **Build and run the application**
Analyze and extract common fields from specific document types using a prebuilt
// create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
// sample document String invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"; String modelId = "prebuilt-invoice";
- SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeInvoicePoller = client.beginAnalyzeDocumentFromUrl(modelId, invoiceUrl);
+ SyncPoller<DocumentOperationResult, AnalyzeResult> analyzeInvoicePoller = client.beginAnalyzeDocumentFromUrl(modelId, invoiceUrl);
AnalyzeResult analyzeInvoiceResult = analyzeInvoicePoller.getFinalResult(); + for (int i = 0; i < analyzeInvoiceResult.getDocuments().size(); i++) {
- AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
- Map < String, DocumentField > invoiceFields = analyzedInvoice.getFields();
- System.out.printf("-- Analyzing invoice %d --%n", i);
- System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
- analyzedInvoice.getDocType(), analyzedInvoice.getConfidence());
-
- DocumentField vendorNameField = invoiceFields.get("VendorName");
- if (vendorNameField != null) {
- if (DocumentFieldType.STRING == vendorNameField.getType()) {
- String merchantName = vendorNameField.getValueString();
- Float confidence = vendorNameField.getConfidence();
- System.out.printf("Vendor Name: %s, confidence: %.2f%n",
- merchantName, vendorNameField.getConfidence());
- }
- }
-
- DocumentField vendorAddressField = invoiceFields.get("VendorAddress");
- if (vendorAddressField != null) {
- if (DocumentFieldType.STRING == vendorAddressField.getType()) {
- String merchantAddress = vendorAddressField.getValueString();
- System.out.printf("Vendor address: %s, confidence: %.2f%n",
- merchantAddress, vendorAddressField.getConfidence());
- }
- }
-
- DocumentField customerNameField = invoiceFields.get("CustomerName");
- if (customerNameField != null) {
- if (DocumentFieldType.STRING == customerNameField.getType()) {
- String merchantAddress = customerNameField.getValueString();
- System.out.printf("Customer Name: %s, confidence: %.2f%n",
- merchantAddress, customerNameField.getConfidence());
- }
- }
-
- DocumentField customerAddressRecipientField = invoiceFields.get("CustomerAddressRecipient");
- if (customerAddressRecipientField != null) {
- if (DocumentFieldType.STRING == customerAddressRecipientField.getType()) {
- String customerAddr = customerAddressRecipientField.getValueString();
- System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n",
- customerAddr, customerAddressRecipientField.getConfidence());
- }
- }
-
- DocumentField invoiceIdField = invoiceFields.get("InvoiceId");
- if (invoiceIdField != null) {
- if (DocumentFieldType.STRING == invoiceIdField.getType()) {
- String invoiceId = invoiceIdField.getValueString();
- System.out.printf("Invoice ID: %s, confidence: %.2f%n",
- invoiceId, invoiceIdField.getConfidence());
- }
- }
-
- DocumentField invoiceDateField = invoiceFields.get("InvoiceDate");
- if (customerNameField != null) {
- if (DocumentFieldType.DATE == invoiceDateField.getType()) {
- LocalDate invoiceDate = invoiceDateField.getValueDate();
- System.out.printf("Invoice Date: %s, confidence: %.2f%n",
- invoiceDate, invoiceDateField.getConfidence());
+ AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
+ Map<String, DocumentField> invoiceFields = analyzedInvoice.getFields();
+ System.out.printf("-- Analyzing invoice %d --%n", i);
+ System.out.printf("Analyzed document has doc type %s with confidence : %.2f%n",
+ analyzedInvoice.getDocType(), analyzedInvoice.getConfidence());
+
+ DocumentField vendorNameField = invoiceFields.get("VendorName");
+ if (vendorNameField != null) {
+ if (DocumentFieldType.STRING == vendorNameField.getType()) {
+ String merchantName = vendorNameField.getValueString();
+ Float confidence = vendorNameField.getConfidence();
+ System.out.printf("Vendor Name: %s, confidence: %.2f%n",
+ merchantName, vendorNameField.getConfidence());
+ }
}
- }
-
- DocumentField invoiceTotalField = invoiceFields.get("InvoiceTotal");
- if (customerAddressRecipientField != null) {
- if (DocumentFieldType.FLOAT == invoiceTotalField.getType()) {
- Float invoiceTotal = invoiceTotalField.getValueFloat();
- System.out.printf("Invoice Total: %.2f, confidence: %.2f%n",
- invoiceTotal, invoiceTotalField.getConfidence());
++
+ DocumentField vendorAddressField = invoiceFields.get("VendorAddress");
+ if (vendorAddressField != null) {
+ if (DocumentFieldType.STRING == vendorAddressField.getType()) {
+ String merchantAddress = vendorAddressField.getValueString();
+ System.out.printf("Vendor address: %s, confidence: %.2f%n",
+ merchantAddress, vendorAddressField.getConfidence());
+ }
}
- }
-
- DocumentField invoiceItemsField = invoiceFields.get("Items");
- if (invoiceItemsField != null) {
- System.out.printf("Invoice Items: %n");
- if (DocumentFieldType.LIST == invoiceItemsField.getType()) {
- List < DocumentField > invoiceItems = invoiceItemsField.getValueList();
- invoiceItems.stream()
- .filter(invoiceItem -> DocumentFieldType.MAP == invoiceItem.getType())
- .map(formField -> formField.getValueMap())
- .forEach(formFieldMap -> formFieldMap.forEach((key, formField) -> {
- // See a full list of fields found on an invoice here:
- // https://aka.ms/formrecognizer/invoicefields
- if ("Description".equals(key)) {
- if (DocumentFieldType.STRING == formField.getType()) {
- String name = formField.getValueString();
- System.out.printf("Description: %s, confidence: %.2fs%n",
- name, formField.getConfidence());
+
+ DocumentField customerNameField = invoiceFields.get("CustomerName");
+ if (customerNameField != null) {
+ if (DocumentFieldType.STRING == customerNameField.getType()) {
+ String merchantAddress = customerNameField.getValueString();
+ System.out.printf("Customer Name: %s, confidence: %.2f%n",
+ merchantAddress, customerNameField.getConfidence());
}
- }
- if ("Quantity".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float quantity = formField.getValueFloat();
- System.out.printf("Quantity: %f, confidence: %.2f%n",
- quantity, formField.getConfidence());
+ }
+
+ DocumentField customerAddressRecipientField = invoiceFields.get("CustomerAddressRecipient");
+ if (customerAddressRecipientField != null) {
+ if (DocumentFieldType.STRING == customerAddressRecipientField.getType()) {
+ String customerAddr = customerAddressRecipientField.getValueString();
+ System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n",
+ customerAddr, customerAddressRecipientField.getConfidence());
}
- }
- if ("UnitPrice".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float unitPrice = formField.getValueFloat();
- System.out.printf("Unit Price: %f, confidence: %.2f%n",
- unitPrice, formField.getConfidence());
+ }
+
+ DocumentField invoiceIdField = invoiceFields.get("InvoiceId");
+ if (invoiceIdField != null) {
+ if (DocumentFieldType.STRING == invoiceIdField.getType()) {
+ String invoiceId = invoiceIdField.getValueString();
+ System.out.printf("Invoice ID: %s, confidence: %.2f%n",
+ invoiceId, invoiceIdField.getConfidence());
}
- }
- if ("ProductCode".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float productCode = formField.getValueFloat();
- System.out.printf("Product Code: %f, confidence: %.2f%n",
- productCode, formField.getConfidence());
+ }
+
+ DocumentField invoiceDateField = invoiceFields.get("InvoiceDate");
+ if (customerNameField != null) {
+ if (DocumentFieldType.DATE == invoiceDateField.getType()) {
+ LocalDate invoiceDate = invoiceDateField.getValueDate();
+ System.out.printf("Invoice Date: %s, confidence: %.2f%n",
+ invoiceDate, invoiceDateField.getConfidence());
}
- }
- }));
- }
- }
- }
- }
- }
+ }
+
+ DocumentField invoiceTotalField = invoiceFields.get("InvoiceTotal");
+ if (customerAddressRecipientField != null) {
+ if (DocumentFieldType.FLOAT == invoiceTotalField.getType()) {
+ Float invoiceTotal = invoiceTotalField.getValueFloat();
+ System.out.printf("Invoice Total: %.2f, confidence: %.2f%n",
+ invoiceTotal, invoiceTotalField.getConfidence());
+ }
+ }
+ DocumentField invoiceItemsField = invoiceFields.get("Items");
+ if (invoiceItemsField != null) {
+ System.out.printf("Invoice Items: %n");
+ if (DocumentFieldType.LIST == invoiceItemsField.getType()) {
+ List < DocumentField > invoiceItems = invoiceItemsField.getValueList();
+ invoiceItems.stream()
+ .filter(invoiceItem -> DocumentFieldType.MAP == invoiceItem.getType())
+ .map(formField -> formField.getValueMap())
+ .forEach(formFieldMap -> formFieldMap.forEach((key, formField) -> {
+ // See a full list of fields found on an invoice here:
+ // https://aka.ms/formrecognizer/invoicefields
+ if ("Description".equals(key)) {
+ if (DocumentFieldType.STRING == formField.getType()) {
+ String name = formField.getValueString();
+ System.out.printf("Description: %s, confidence: %.2fs%n",
+ name, formField.getConfidence());
+ }
+ }
+ if ("Quantity".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float quantity = formField.getValueFloat();
+ System.out.printf("Quantity: %f, confidence: %.2f%n",
+ quantity, formField.getConfidence());
+ }
+ }
+ if ("UnitPrice".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float unitPrice = formField.getValueFloat();
+ System.out.printf("Unit Price: %f, confidence: %.2f%n",
+ unitPrice, formField.getConfidence());
+ }
+ }
+ if ("ProductCode".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float productCode = formField.getValueFloat();
+ System.out.printf("Product Code: %f, confidence: %.2f%n",
+ productCode, formField.getConfidence());
+ }
+ }
+ }));
+ }
+ }
+ }
+ }
+ }
``` **Build and run the application**
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
Previously updated : 06/13/2022 Last updated : 06/22/2022 recommendations: false
Extract text, tables, structure, key-value pairs, and named entities from docume
const formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf" async function main() {
- // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
-
- const poller = await client.beginAnalyzeDocument("prebuilt-document", formUrl);
-
- const {
- keyValuePairs,
- entities
- } = await poller.pollUntilDone();
+ const client = new DocumentAnalysisClient(endpoint, new AzureKeyCredential(key));
+
+ const poller = await client.beginAnalyzeDocument("prebuilt-document", formUrl);
+
+ const {
+ keyValuePairs,
+ entities
+ } = await poller.pollUntilDone();
+
+ if (keyValuePairs.length <= 0) {
+ console.log("No key-value pairs were extracted from the document.");
+ } else {
+ console.log("Key-Value Pairs:");
+ for (const {
+ key,
+ value,
+ confidence
+ } of keyValuePairs) {
+ console.log("- Key :", `"${key.content}"`);
+ console.log(" Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
+ }
+ }
+
+}
+
+main().catch((error) => {
+ console.error("An error occurred:", error);
+ process.exit(1);
+});
- if (keyValuePairs.length <= 0) {
- console.log("No key-value pairs were extracted from the document.");
- } else {
- console.log("Key-Value Pairs:");
- for (const {
- key,
- value,
- confidence
- } of keyValuePairs) {
- console.log("- Key :", `"${key.content}"`);
- console.log(" Value:", `"${value?.content ?? "<undefined>"}" (${confidence})`);
- }
- }
-
- if (entities.length <= 0) {
- console.log("No entities were extracted from the document.");
- } else {
- console.log("Entities:");
- for (const entity of entities) {
- console.log(
- `- "${entity.content}" ${entity.category} - ${entity.subCategory ?? "<none>"} (${
- entity.confidence
- })`
- );
- }
- }
- }
-
- main().catch((error) => {
- console.error("An error occurred:", error);
- process.exit(1);
- });
``` **Run your application**
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 03/31/2022 Last updated : 06/23/2022 recommendations: false
key = "<your-key>"
def format_bounding_region(bounding_regions): if not bounding_regions: return "N/A"
- return ", ".join("Page #{}: {}".format(region.page_number, format_bounding_box(region.bounding_box)) for region in bounding_regions)
+ return ", ".join("Page #{}: {}".format(region.page_number, format_polygon(region.polygon)) for region in bounding_regions)
-def format_bounding_box(bounding_box):
- if not bounding_box:
+def format_polygon(polygon):
+ if not polygon:
return "N/A"
- return ", ".join(["[{}, {}]".format(p.x, p.y) for p in bounding_box])
+ return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])
def analyze_general_documents():
def analyze_general_documents():
) )
- print("-Entities found in document-")
- for entity in result.entities:
- print("Entity of category '{}' with sub-category '{}'".format(entity.category, entity.sub_category))
- print("...has content '{}'".format(entity.content))
- print("...within '{}' bounding regions".format(format_bounding_region(entity.bounding_regions)))
- print("...with confidence {}\n".format(entity.confidence))
- for page in result.pages: print("-Analyzing document from page #{}-".format(page.page_number)) print(
def analyze_general_documents():
"...Line # {} has text content '{}' within bounding box '{}'".format( line_idx, line.content,
- format_bounding_box(line.bounding_box),
+ format_polygon(line.polygon),
) )
def analyze_general_documents():
print( "...Selection mark is '{}' within bounding box '{}' and has a confidence of {}".format( selection_mark.state,
- format_bounding_box(selection_mark.bounding_box),
+ format_polygon(selection_mark.polygon),
selection_mark.confidence, ) )
def analyze_general_documents():
"Table # {} location on page: {} is {}".format( table_idx, region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) ) for cell in table.cells:
def analyze_general_documents():
print( "...content on page {} is within bounding box '{}'\n".format( region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) ) print("-")
from azure.core.credentials import AzureKeyCredential
endpoint = "<your-endpoint>" key = "<your-key>"
-def format_bounding_box(bounding_box):
- if not bounding_box:
+def format_polygon(polygon):
+ if not polygon:
return "N/A"
- return ", ".join(["[{}, {}]".format(p.x, p.y) for p in bounding_box])
+ return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])
def analyze_layout(): # sample form document formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
- # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint=endpoint, credential=AzureKeyCredential(key)
+ )
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-layout", formUrl)
def analyze_layout():
line_idx, len(words), line.content,
- format_bounding_box(line.bounding_box),
+ format_polygon(line.polygon),
) )
def analyze_layout():
print( "...Selection mark is '{}' within bounding box '{}' and has a confidence of {}".format( selection_mark.state,
- format_bounding_box(selection_mark.bounding_box),
+ format_polygon(selection_mark.polygon),
selection_mark.confidence, ) )
def analyze_layout():
"Table # {} location on page: {} is {}".format( table_idx, region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) ) for cell in table.cells:
def analyze_layout():
print( "...content on page {} is within bounding box '{}'".format( region.page_number,
- format_bounding_box(region.bounding_box),
+ format_polygon(region.polygon),
) )
key = "<your-key>"
def format_bounding_region(bounding_regions): if not bounding_regions: return "N/A"
- return ", ".join("Page #{}: {}".format(region.page_number, format_bounding_box(region.bounding_box)) for region in bounding_regions)
+ return ", ".join("Page #{}: {}".format(region.page_number, format_polygon(region.polygon)) for region in bounding_regions)
-def format_bounding_box(bounding_box):
- if not bounding_box:
+def format_polygon(polygon):
+ if not polygon:
return "N/A"
- return ", ".join(["[{}, {}]".format(p.x, p.y) for p in bounding_box])
+ return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])
def analyze_invoice(): invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"
- # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
- document_analysis_client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+ document_analysis_client = DocumentAnalysisClient(
+ endpoint=endpoint, credential=AzureKeyCredential(key)
+ )
poller = document_analysis_client.begin_analyze_document_from_url( "prebuilt-invoice", invoiceUrl)
def analyze_invoice():
if __name__ == "__main__": analyze_invoice()+
+ print("-")
``` **Run the application**
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
Before setting up your Automation account resource, consider your network isolat
### Connect to a private endpoint
-Create a private endpoint to connect our network. You can create it in the [Azure portal Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints). Once your changes to publicNetworkAccess and Private Link are applied, it can take up to 35 minutes for them to take effect.
+Follow the steps below to create a private endpoint for your Automation account.
-In this section, you'll create a private endpoint for your Automation account.
+1. Go to the [Private Link center](https://portal.azure.com/#blade/Microsoft_Azure_Network/PrivateLinkCenterBlade/privateendpoints) in the Azure portal to create a private endpoint that connects your network.
+Once your changes to public network access and Private Link are applied, it can take up to 35 minutes for them to take effect.
-1. On the upper-left side of the screen, select **Create a resource > Networking > Private Link Center**.
+1. On **Private Link Center**, select **Create private endpoint**.
-2. In **Private Link Center - Overview**, on the option to **Build a private connection to a service**, select **Start**.
+ :::image type="content" source="./media/private-link-security/create-private-endpoint.png" alt-text="Screenshot of how to create a private endpoint.":::
-3. In **Create a virtual machine - Basics**, enter or select the following information:
+1. On **Basics**, enter the following details:
+ - **Subscription**
+ - **Resource group**
+ - **Name**
+ - **Network Interface Name**
+ - **Region** and select **Next: Resource**.
- | Setting | Value |
- | - | -- |
- | **PROJECT DETAILS** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. You created this in the previous section. |
- | **INSTANCE DETAILS** | |
- | Name | Enter your *PrivateEndpoint*. |
- | Region | Select **YourRegion**. |
- |||
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-basics.png" alt-text="Screenshot of how to create a private endpoint in Basics tab.":::
-4. Select **Next: Resource**.
+1. On **Resource**, enter the following details:
+ - **Connection method**, select the default option - *Connect to an Azure resource in my directory*.
+ - **Subscription**
+ - **Resource type**
+ - **Resource**.
+ - For **Target sub-resource**, select either *Webhook* or *DSCAndHybridWorker*, depending on your scenario, and then select **Next: Virtual Network**.
+
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-resource-inline.png" alt-text="Screenshot of how to create a private endpoint in Resource tab." lightbox="./media/private-link-security/create-private-endpoint-resource-expanded.png":::
-5. In **Create a private endpoint - Resource**, enter or select the following information:
+1. On **Virtual Network**, enter the following details:
+ - **Virtual network**
+ - **Subnet**
+ - Enable the checkbox for **Enable network policies for all private endpoints in this subnet**.
+ - Select **Dynamically allocate IP address**, and then select **Next: DNS**.
- | Setting | Value |
- | - | -- |
- |Connection method | Select connect to an Azure resource in my directory.|
- | Subscription| Select your subscription. |
- | Resource type | Select **Microsoft.Automation/automationAccounts**. |
- | Resource |Select *myAutomationAccount*|
- |Target subresource |Select *Webhook* or *DSCAndHybridWorker* depending on your scenario.|
- |||
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-virtual-network-inline.png" alt-text="Screenshot of how to create a private endpoint in Virtual network tab." lightbox="./media/private-link-security/create-private-endpoint-virtual-network-expanded.png":::
-6. Select **Next: Configuration**.
+1. On **DNS**, the data is populated based on the information you entered on the **Basics**, **Resource**, and **Virtual Network** tabs, and a private DNS zone is created. Enter the following details:
+ - **Integrate with private DNS Zone**
+ - **Subscription**
+ - **Resource group**, and then select **Next: Tags**
-7. In **Create a private endpoint - Configuration**, enter or select the following information:
+ :::image type="content" source="./media/private-link-security/create-private-endpoint-dns-inline.png" alt-text="Screenshot of how to create a private endpoint in DNS tab." lightbox="./media/private-link-security/create-private-endpoint-dns-expanded.png":::
- | Setting | Value |
- | - | -- |
- |**NETWORKING**| |
- | Virtual network| Select *MyVirtualNetwork*. |
- | Subnet | Select *mySubnet*. |
- |**PRIVATE DNS INTEGRATION**||
- |Integrate with private DNS zone |Select **Yes**. |
- |Private DNS Zone |Select *(New)privatelink.azure-automation.net* |
- |||
+1. On **Tags**, you can categorize resources. Enter a **Name** and **Value**, and then select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-8. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-9. When you see the **Validation passed** message, select **Create**.
+On the **Private Link Center**, select **Private endpoints** to view your private link resource.
-In the **Private Link Center**, select **Private endpoints** to view your private link resource.
-
-![Automation resource private link](./media/private-link-security/private-link-automation-resource.png)
Select the resource to see all the details. This creates a new private endpoint for your Automation account and assigns it a private IP from your virtual network. The **Connection status** shows as **approved**.
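If you prefer to script this step rather than use the Private Link center, a minimal Azure CLI sketch might look like the following. The resource group, virtual network, subnet, and Automation account names are placeholders, and the `--group-id` value should be *Webhook* or *DSCAndHybridWorker* depending on your scenario, as described above.

```azurecli
# Look up the Automation account resource ID (placeholder names).
automationId=$(az automation account show \
  --name myAutomationAccount \
  --resource-group myResourceGroup \
  --query id --output tsv)

# Create the private endpoint in an existing virtual network and subnet.
az network private-endpoint create \
  --name myPrivateEndpoint \
  --resource-group myResourceGroup \
  --vnet-name MyVirtualNetwork \
  --subnet mySubnet \
  --private-connection-resource-id "$automationId" \
  --group-id DSCAndHybridWorker \
  --connection-name myAutomationConnection
```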
automation Powershell Runbook Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/powershell-runbook-managed-identity.md
Remove-AzRoleAssignment `
## Next steps
-In this tutorial, you created a [PowerShell runbook](../automation-runbook-types.md#powershell-runbooks) in Azure Automation that used a[managed identity](../automation-security-overview.md#managed-identities), rather than the Run As account to interact with resources. For a look at PowerShell Workflow runbooks, see:
+In this tutorial, you created a [PowerShell runbook](../automation-runbook-types.md#powershell-runbooks) in Azure Automation that used a [managed identity](../automation-security-overview.md#managed-identities), rather than the Run As account to interact with resources. For a look at PowerShell Workflow runbooks, see:
> [!div class="nextstepaction"] > [Tutorial: Create a PowerShell Workflow runbook](automation-tutorial-runbook-textual.md)
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
Manage Oracle Linux 6 and 7 machines with Automation State Configuration. See [S
**Type:** New feature
-Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions in Azure global cloud. For more information, see the [announcement]((https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
+Azure Automation now supports Python 3 cloud and hybrid runbook execution in public preview in all regions in Azure global cloud. For more information, see the [announcement](https://azure.microsoft.com/updates/azure-automation-python-3-public-preview/).
## November 2020
avere-vfxt Avere Vfxt Additional Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-additional-resources.md
This article has links to additional documentation about the Avere Control Panel
## Avere cluster documentation
-Additional Avere cluster documentation can be found on the website at <https://azure.github.io/Avere/>. These documents can help you understand the cluster's capabilities and how to configure its settings.
+Additional Avere cluster documentation can be found on the [Avere website](https://azure.github.io/Avere/). These documents can help you understand the cluster's capabilities and how to configure its settings.
* The [FXT Cluster Creation Guide](https://azure.github.io/Avere/#fxt_cluster) is designed for clusters made up of physical hardware nodes, but some information in the document is relevant for vFXT clusters as well. In particular, new vFXT cluster administrators can benefit from reading these sections: * [Customizing Support and Monitoring Settings](https://azure.github.io/Avere/legacy/create_cluster/4_8/html/config_support.html#config-support) explains how to customize support upload settings and enable remote monitoring.
avere-vfxt Avere Vfxt Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-overview.md
Avere vFXT for Azure is best suited for these situations:
* Compute farms of 1000 to 40,000 CPU cores * Integration with on-premises hardware NAS, Azure Blob storage, or both
-For more information, visit <https://azure.microsoft.com/services/storage/avere-vfxt/>
+For more information, see [Avere vFXT for Azure](https://azure.microsoft.com/services/storage/avere-vfxt/).
## Who uses Avere vFXT for Azure?
avere-vfxt Avere Vfxt Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/avere-vfxt/avere-vfxt-prereqs.md
This step only needs to be done once per subscription.
To accept the software terms in advance:
-1. Open a cloud shell in the Azure portal or by browsing to <https://shell.azure.com>. Sign in with your subscription ID.
+1. Use the [Azure Cloud Shell](https://shell.azure.com) to sign in using your subscription ID.
```azurecli az login
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
The Spring Boot Feature Management libraries extend the framework with comprehen
## Create a Spring Boot app
-Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boot project.
+To create a new Spring Boot project:
-1. Browse to <https://start.spring.io/>.
+1. Browse to the [Spring Initializr](https://start.spring.io).
1. Specify the following options:
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
In this quickstart, you incorporate Azure App Configuration into a Java Spring a
## Create a Spring Boot app
-Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boot project.
+To create a new Spring Boot project:
-1. Browse to <https://start.spring.io/>.
+1. Browse to the [Spring Initializr](https://start.spring.io).
1. Specify the following options:
azure-arc Storage Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/storage-configuration.md
When creating an instance using either `az sql mi-arc create` or `az postgres ar
|Parameter name, short name|Used for| |||
-|`--storage-class-data`, `-d`|Used to specify the storage class for all data files including transaction log files|
-|`--storage-class-logs`, `-g`|Used to specify the storage class for all log files|
-|`--storage-class-data-logs`|Used to specify the storage class for the database transaction log files.|
-|`--storage-class-backups`|Used to specify the storage class for all backup files. Use a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). |
+|`--storage-class-data`, `-d`|Storage class for all data files (.mdf, .ndf). If not specified, defaults to storage class for data controller.|
+|`--storage-class-logs`, `-g`|Storage class for all log files. If not specified, defaults to storage class for data controller.|
+|`--storage-class-data-logs`|Storage class for the database transaction log files. If not specified, defaults to storage class for data controller.|
+|`--storage-class-backups`|Storage class for all backup files. If not specified, defaults to storage class for data (`--storage-class-data`).<br/><br/> Use a ReadWriteMany (RWX) capable storage class for backups. Learn more about [access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). |
> [!WARNING]
-> If you don't specify a storage class for backups, the deployment uses the default storage class in Kubernetes. If this storage class isn't RWX capable, the deployment may not succeed.
+> If you don't specify a storage class for backups, the deployment uses the storage class specified for data. If this storage class isn't RWX capable, the point-in-time restore may not work as desired.
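To illustrate how these parameters fit together, here's a hedged `az sql mi-arc create` sketch. The instance name, namespace, and storage class names (`managed-premium`, `azurefile`) are example values only, not recommendations for your cluster.

```azurecli
az sql mi-arc create \
  --name sqlmi1 \
  --k8s-namespace arc-data \
  --use-k8s \
  --storage-class-data managed-premium \
  --storage-class-logs managed-premium \
  --storage-class-backups azurefile
```

Here `azurefile` stands in for any ReadWriteMany (RWX) capable storage class, per the guidance above.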
The table below lists the paths inside the Azure SQL Managed Instance container that is mapped to the persistent volume for data and logs:
-|Parameter name, short name|Path inside mssql-miaa container|Description|
+|Parameter name, short name|Path inside `mssql-miaa` container|Description|
|||| |`--storage-class-data`, `-d`|/var/opt|Contains directories for the mssql installation and other system processes. The mssql directory contains default data (including transaction logs), error log & backup directories| |`--storage-class-logs`, `-g`|/var/log|Contains directories that store console output (stderr, stdout), other logging information of processes inside the container|
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics.md
Once your metrics are uploaded, you can view them from the Azure portal.
> Please note that it can take a couple of minutes for the uploaded data to be processed before you can view the metrics in the portal.
-To view your metrics in the portal, use this link to open the portal: <https://portal.azure.com>
-Then, search for your database instance by name in the search bar:
+To view your metrics, navigate to the [Azure portal](https://portal.azure.com). Then, search for your database instance by name in the search bar:
You can view CPU utilization on the Overview page or if you want more detailed metrics you can click on metrics from the left navigation panel
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
You should see output similar to the example below.
Next, specify the Azure Key Vault to use with your connected cluster. If you don't already have one, create a new Key Vault by using the following commands. Keep in mind that the name of your Key Vault must be globally unique.
-```azurecli
-az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
-Next, set the following environment variables:
+Set the following environment variables:
```azurecli-interactive export AKV_RESOURCE_GROUP=<resource-group-name> export AZUREKEYVAULT_NAME=<AKV-name> export AZUREKEYVAULT_LOCATION=<AKV-location> ```
+Next, run the following command to create the Key Vault:
+
+```azurecli
+az keyvault create -n $AZUREKEYVAULT_NAME -g $AKV_RESOURCE_GROUP -l $AZUREKEYVAULT_LOCATION
+```
Azure Key Vault can store keys, secrets, and certificates. For this example, you can set a plain text secret called `DemoSecret` by using the following command:
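The command itself isn't shown in this excerpt; a sketch using the environment variables defined earlier might look like the following (the secret value is a placeholder):

```azurecli
az keyvault secret set \
  --vault-name $AZUREKEYVAULT_NAME \
  --name DemoSecret \
  --value "MyExampleSecret"
```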
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed
apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata:
- name: akvprovider-demo
+ name: akvprovider-demo
spec: provider: azure parameters:
For more information about resolving common issues, see the open source troubles
## Next steps - Want to try things out? Get started quickly with an [Azure Arc Jumpstart scenario](https://aka.ms/arc-jumpstart-akv-secrets-provider) using Cluster API.-- Learn more about [Azure Key Vault](../../key-vault/general/overview.md).
+- Learn more about [Azure Key Vault](../../key-vault/general/overview.md).
azure-fluid-relay Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/concepts/authentication-authorization.md
The secret key is how the Azure Fluid Relay service knows that requests are comi
Azure Fluid Relay uses [JSON Web Tokens (JWTs)](https://jwt.io/) to encode and verify data signed with your secret key. JSON Web Tokens are a signed bit of JSON that can include additional information about rights and permissions. > [!NOTE]
-> The specifics of JWTs are beyond the scope of this article. For more details about the JWT standard see
-> <https://jwt.io/introduction>.
+> The specifics of JWTs are beyond the scope of this article. For more information about the JWT standard, see [Introduction to JSON Web Tokens](https://jwt.io/introduction).
Though the details of authentication differ between Fluid services, several values must always be present.
azure-fluid-relay Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/reference/service-limits.md
+
+ Title: Azure Fluid Relay limits
+description: Limits and throttles applied in Azure Fluid Relay.
+ Last updated: 08/19/2021
+# Azure Fluid Relay Limits
+
+This article outlines known limitations of Azure Fluid Relay.
+
+## Distributed Data Structures
+
+The Azure Fluid Relay doesn't support [experimental distributed data structures (DDSes)](https://fluidframework.com/docs/data-structures/experimental/). These include but are not limited to DDS packages with the `@fluid-experimental` package namespace.
+
+## Fluid sessions
+
+The maximum number of simultaneous users in one session on Azure Fluid Relay is 100. The limit applies to simultaneous users: the 101st user who attempts to join the session is rejected. If an existing user leaves the session, a new user can join, because the number of simultaneous users drops below the limit.
+
+## Fluid Summaries
+
+Incremental summaries uploaded to Azure Fluid Relay can't exceed 28 MB in size. For more information, see the [Fluid Framework summarizer documentation](https://fluidframework.com/docs/concepts/summarizer).
+
+## Signals
+
+Azure Fluid Relay doesn't currently support signals. To learn more, see the [Fluid Framework signals documentation](https://fluidframework.com/docs/concepts/signals/).
+
+## Need help?
+
+If you need help with any of the above limits or other Azure Fluid Relay topics, see the [support](../resources/support.md) options available to you.
azure-fluid-relay Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/support.md
+
+ Title: Azure Fluid Relay support
+description: Help and support options for Azure Fluid Relay.
+ Last updated: 08/19/2021
+# Help and Support options for Azure Fluid Relay
+
+If you have an issue or question involving Azure Fluid Relay, the following options are available.
+
+## Check out Frequently Asked Questions
+
+You can check whether your question is already answered on our [Frequently Asked Questions](faq.md) page.
+
+## Create an Azure Support Request
+
+With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+
+## Post a question to Microsoft Q&A
+
+For quick and reliable answers to product or technical questions you might have about Azure Fluid Relay from Microsoft Engineers, Azure Most Valuable Professionals (MVPs), or our community, engage with us on [Microsoft Q&A](/answers/products/azure).
+
+If you can't find an answer to your problem by searching, you can submit a new question to Microsoft Q&A. When creating a question, make sure to use the [Azure Fluid Relay tag](/answers/topics/azure-fluid-relay.html).
+
+## Post a question on Stack Overflow
+
+You can also try asking your question on Stack Overflow, which has a large developer community and ecosystem. Azure Fluid Relay has a [dedicated tag](https://stackoverflow.com/questions/tagged/azure-fluid-relay) there too.
+
+## Fluid Framework
+
+For questions about the Fluid Framework, you can file issues on [GitHub](https://github.com/microsoft/fluidframework). The [Fluid Framework site](https://fluidframework.com) is another good source of information about the framework.
+
azure-functions Create First Function Arc Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-custom-container.md
In Azure Functions, a function project is the context for one or more individual
cd LocalFunctionProj ```
- This folder contains the Dockerfile other files for the project, including configurations files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
+ This folder contains the `Dockerfile` and other files for the project, including configuration files named [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md). By default, the *local.settings.json* file is excluded from source control in the *.gitignore* file. This exclusion is because the file can contain secrets that are downloaded from Azure.
1. Open the generated `Dockerfile` and locate the `3.0` tag for the base image. If there's a `3.0` tag, replace it with a `3.0.15885` tag. For example, in a JavaScript application, the Docker file should be modified to have `FROM mcr.microsoft.com/azure-functions/node:3.0.15885`. This version of the base image supports deployment to an Azure Arc-enabled Kubernetes cluster.
azure-functions Create Function App Linux App Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-function-app-linux-app-service-plan.md
Azure Functions lets you host your functions on Linux in a default Azure App Ser
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com> with your Azure account.
+Sign in to the [Azure portal](https://portal.azure.com) using your Azure account.
## Create a function app
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Azure Functions deployment slots allow your function app to run different instan
The following reflect how functions are affected by swapping slots: -- Traffic redirection is seamless; no requests are dropped because of a swap.-- If a function is running during a swap, execution continues and the next triggers are routed to the swapped app instance.
+- Traffic redirection is seamless; no requests are dropped because of a swap. This seamless behavior is a result of the next function triggers being routed to the swapped slot.
+- Currently executing functions are terminated during the swap. Review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
## Why use slots?
Some configuration settings are slot-specific. The following lists detail which
**Slot-specific settings**:
-* Publishing endpoints
-* Custom domain names
-* Non-public certificates and TLS/SSL settings
-* Scale settings
-* IP restrictions
-* Always On
-* Diagnostic settings
-* Cross-origin resource sharing (CORS)
+- Publishing endpoints
+- Custom domain names
+- Non-public certificates and TLS/SSL settings
+- Scale settings
+- IP restrictions
+- Always On
+- Diagnostic settings
+- Cross-origin resource sharing (CORS)
**Non slot-specific settings**:
-* General settings, such as framework version, 32/64-bit, web sockets
-* App settings (can be configured to stick to a slot)
-* Connection strings (can be configured to stick to a slot)
-* Handler mappings
-* Public certificates
-* Hybrid connections *
-* Virtual network integration *
-* Service endpoints *
-* Azure Content Delivery Network *
+- General settings, such as framework version, 32/64-bit, web sockets
+- App settings (can be configured to stick to a slot)
+- Connection strings (can be configured to stick to a slot)
+- Handler mappings
+- Public certificates
+- Hybrid connections *
+- Virtual network integration *
+- Service endpoints *
+- Azure Content Delivery Network *
-Features marked with an asterisk (*) are planned to be unswapped.
+Features marked with an asterisk (*) are planned to be unswapped.
> [!NOTE] > Certain app settings that apply to unswapped settings are also not swapped. For example, since diagnostic settings are not swapped, related app settings like `WEBSITE_HTTPLOGGING_RETENTION_DAYS` and `DIAGNOSTICS_AZUREBLOBRETENTIONDAYS` are also not swapped, even if they don't show up as slot settings.
You can swap slots via the [CLI](/cli/azure/functionapp/deployment/slot#az-funct
:::image type="content" source="./media/functions-deployment-slots/functions-swap-deployment-slot.png" alt-text="Screenshot that shows the 'Deployment slot' page with the 'Add Slot' action selected." border="true"::: 1. Verify the configuration settings for your swap and select **Swap**
-
+ :::image type="content" source="./media/functions-deployment-slots/azure-functions-deployment-slots-swap-config.png" alt-text="Swap the deployment slot." border="true":::

The operation may take a moment while the swap operation is executing.
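If you script the swap instead of using the portal, a minimal sketch (assuming a slot named `staging` and placeholder app and resource group names) is:

```azurecli
az functionapp deployment slot swap \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --slot staging \
  --target-slot production
```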
azure-functions Functions Deployment Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-technologies.md
Last updated 05/18/2022
# Deployment technologies in Azure Functions
-You can use a few different technologies to deploy your Azure Functions project code to Azure. This article provides an overview of the deployment methods available to you and recommendations for the best method to use in various scenarios. It also provides an exhaustive list of and key details about the underlying deployment technologies.
+You can use a few different technologies to deploy your Azure Functions project code to Azure. This article provides an overview of the deployment methods available to you and recommendations for the best method to use in various scenarios. It also provides an exhaustive list of and key details about the underlying deployment technologies.
## Deployment methods
The following table describes the available deployment methods for your Function
| Deployment&nbsp;type | Methods | Best for... | | -- | -- | -- |
-| Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad hoc deployments. Deployments are managed locally by the tooling. |
+| Tools-based | &bull;&nbsp;[Visual&nbsp;Studio&nbsp;Code&nbsp;publish](functions-develop-vs-code.md#publish-to-azure)<br/>&bull;&nbsp;[Visual Studio publish](functions-develop-vs.md#publish-to-azure)<br/>&bull;&nbsp;[Core Tools publish](functions-run-local.md#publish) | Deployments during development and other ad hoc deployments. Deployments are managed locally by the tooling. |
| App Service-managed| &bull;&nbsp;[Deployment&nbsp;Center&nbsp;(CI/CD)](functions-continuous-deployment.md)<br/>&bull;&nbsp;[Container&nbsp;deployments](functions-create-function-linux-custom-image.md#enable-continuous-deployment-to-azure) | Continuous deployment (CI/CD) from source control or from a container registry. Deployments are managed by the App Service platform (Kudu).| | External pipelines|&bull;&nbsp;[Azure Pipelines](functions-how-to-azure-devops.md)<br/>&bull;&nbsp;[GitHub Actions](functions-how-to-github-actions.md) | Production and DevOps pipelines that include additional validation, testing, and other actions be run as part of an automated deployment. Deployments are managed by the pipeline. |
Some key concepts are critical to understanding how deployments work in Azure Fu
When you change any of your triggers, the Functions infrastructure must be aware of the changes. Synchronization happens automatically for many deployment technologies. However, in some cases, you must manually sync your triggers. When you deploy your updates by referencing an external package URL, local Git, cloud sync, or FTP, you must manually sync your triggers. You can sync triggers in one of three ways:
-* Restart your function app in the Azure portal.
-* Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](functions-bindings-http-webhook-trigger.md#authorization-keys).
-* Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app.
++ Restart your function app in the Azure portal.
++ Send an HTTP POST request to `https://{functionappname}.azurewebsites.net/admin/host/synctriggers?code=<API_KEY>` using the [master key](functions-bindings-http-webhook-trigger.md#authorization-keys).
++ Send an HTTP POST request to `https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01`. Replace the placeholders with your subscription ID, resource group name, and the name of your function app.

When you deploy using an external package URL and the contents of the package change but the URL itself doesn't change, you need to manually restart your function app to fully sync your updates.
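As an illustration, the management-API option above can be issued with `az rest`. This is only a sketch; replace the placeholders as described.

```azurecli
az rest --method post \
  --uri "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.Web/sites/<FUNCTION_APP_NAME>/syncfunctiontriggers?api-version=2016-08-01"
```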
When an app is deployed to Windows, language-specific commands, like `dotnet res
To enable remote build on Linux, the following [application settings](functions-how-to-use-azure-function-app-settings.md#settings) must be set:
-* `ENABLE_ORYX_BUILD=true`
-* `SCM_DO_BUILD_DURING_DEPLOYMENT=true`
++ `ENABLE_ORYX_BUILD=true`
++ `SCM_DO_BUILD_DURING_DEPLOYMENT=true`

By default, both [Azure Functions Core Tools](functions-run-local.md) and the [Azure Functions Extension for Visual Studio Code](./create-first-function-vs-code-csharp.md#publish-the-project-to-azure) perform remote builds when deploying to Linux. Because of this, both tools automatically create these settings for you in Azure.
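If you need to add these settings yourself, a hedged Azure CLI sketch (app and resource group names are placeholders):

```azurecli
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings ENABLE_ORYX_BUILD=true SCM_DO_BUILD_DURING_DEPLOYMENT=true
```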
You can use an external package URL to reference a remote package (.zip) file th
> >If you use Azure Blob storage, use a private container with a [shared access signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) to give Functions access to the package. Any time the application restarts, it fetches a copy of the content. Your reference must be valid for the lifetime of the application.
->__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed. When you change the contents of the package file and not the URL itself, you must also restart your function app manually.
+>__When to use it:__ External package URL is the only supported deployment method for Azure Functions running on Linux in the Consumption plan, if the user doesn't want a [remote build](#remote-build) to occur. When you update the package file that a function app references, you must [manually sync triggers](#trigger-syncing) to tell Azure that your application has changed. When you change the contents of the package file and not the URL itself, you must also restart your function app manually.
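The manual restart mentioned above can also be scripted; for example (placeholder names):

```azurecli
az functionapp restart \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME>
```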
### Zip deploy
You can deploy a Linux container image that contains your function app.
>__How to use it:__ Create a Linux function app in the Premium or Dedicated plan and specify which container image to run from. You can do this in two ways: >
->* Create a Linux function app on an Azure App Service plan in the Azure portal. For **Publish**, select **Docker Image**, and then configure the container. Enter the location where the image is hosted.
->* Create a Linux function app on an App Service plan by using the Azure CLI. To learn how, see [Create a function on Linux by using a custom image](functions-create-function-linux-custom-image.md#create-supporting-azure-resources-for-your-function).
+>+ Create a Linux function app on an Azure App Service plan in the Azure portal. For **Publish**, select **Docker Image**, and then configure the container. Enter the location where the image is hosted.
+>+ Create a Linux function app on an App Service plan by using the Azure CLI. To learn how, see [Create a function on Linux by using a custom image](functions-create-function-linux-custom-image.md#create-supporting-azure-resources-for-your-function).
> >To deploy to a Kubernetes cluster as a custom container, in [Azure Functions Core Tools](functions-run-local.md), use the [`func kubernetes deploy`](functions-core-tools-reference.md#func-kubernetes-deploy) command.
In the portal-based editor, you can directly edit the files that are in your fun
>__When to use it:__ The portal is a good way to get started with Azure Functions. For more intense development work, we recommend that you use one of the following client tools: >
->* [Visual Studio Code](./create-first-function-vs-code-csharp.md)
->* [Azure Functions Core Tools (command line)](functions-run-local.md)
->* [Visual Studio](functions-create-your-first-function-visual-studio.md)
+>+ [Visual Studio Code](./create-first-function-vs-code-csharp.md)
+>+ [Azure Functions Core Tools (command line)](functions-run-local.md)
+>+ [Visual Studio](functions-create-your-first-function-visual-studio.md)
The following table shows the operating systems and languages that support portal editing:
The following table shows the operating systems and languages that support porta
## Deployment behaviors
-When you do a deployment, all existing executions are allowed to complete or time out, after which the new code is loaded to begin processing requests.
+When you deploy updates to your function app code, currently executing functions are terminated. After deployment completes, the new code is loaded to begin processing requests. Please review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
If you need more control over this transition, you should use deployment slots.
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
Logging in PowerShell functions works like regular PowerShell logging. You can u
| - | -- | | Error | **`Write-Error`** | | Warning | **`Write-Warning`** |
-| Information | **`Write-Information`** <br/> **`Write-Host`** <br /> **`Write-Output`** <br/> Writes to _Information_ level logging. |
+| Information | **`Write-Information`** <br/> **`Write-Host`** <br /> **`Write-Output`** <br/> Writes to the `Information` log level. |
| Debug | **`Write-Debug`** | | Trace | **`Write-Progress`** <br /> **`Write-Verbose`** |
Functions lets you leverage [PowerShell gallery](https://www.powershellgallery.c
} ```
-When you create a new PowerShell functions project, dependency management is enabled by default, with the Azure [`Az` module](/powershell/azure/new-azureps-module-az) included. The maximum number of modules currently supported is 10. The supported syntax is _`MajorNumber`_`.*` or exact module version as shown in the following requirements.psd1 example:
+When you create a new PowerShell functions project, dependency management is enabled by default, with the Azure [`Az` module](/powershell/azure/new-azureps-module-az) included. The maximum number of modules currently supported is 10. The supported syntax is *`MajorNumber.*`* or exact module version, as shown in the following requirements.psd1 example:
```powershell @{
In this way, the older version of the Az.Account module is loaded first when the
The following considerations apply when using dependency management:
-+ Managed dependencies requires access to <https://www.powershellgallery.com> to download modules. When running locally, make sure that the runtime can access this URL by adding any required firewall rules.
++ Managed dependencies requires access to `https://www.powershellgallery.com` to download modules. When running locally, make sure that the runtime can access this URL by adding any required firewall rules.
+ Managed dependencies currently don't support modules that require the user to accept a license, either by accepting the license interactively, or by providing `-AcceptLicense` switch when invoking `Install-Module`.
Depending on your use case, Durable Functions may significantly improve scalabil
### Considerations for using concurrency
-PowerShell is a _single threaded_ scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The amount of runspaces created, and therefore the number of concurrent threads per worker, is limited by the ```PSWorkerInProcConcurrencyUpperBound``` application setting. By default, the number of runspaces is set to 1,000 in version 4.x of the Functions runtime. In versions 3.x and below, the maximum number of runspaces is set to 1. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
+PowerShell is a *single threaded* scripting language by default. However, concurrency can be added by using multiple PowerShell runspaces in the same process. The number of runspaces created, and therefore the number of concurrent threads per worker, is limited by the `PSWorkerInProcConcurrencyUpperBound` application setting. By default, the number of runspaces is set to 1,000 in version 4.x of the Functions runtime. In versions 3.x and below, the maximum number of runspaces is set to 1. The throughput will be impacted by the amount of CPU and memory available in the selected plan.
-Azure PowerShell uses some _process-level_ contexts and state to help save you from excess typing. However, if you turn on concurrency in your function app and invoke actions that change state, you could end up with race conditions. These race conditions are difficult to debug because one invocation relies on a certain state and the other invocation changed the state.
+Azure PowerShell uses some *process-level* contexts and state to help save you from excess typing. However, if you turn on concurrency in your function app and invoke actions that change state, you could end up with race conditions. These race conditions are difficult to debug because one invocation relies on a certain state and the other invocation changed the state.
There's immense value in concurrency with Azure PowerShell, since some operations can take a considerable amount of time. However, you must proceed with caution. If you suspect that you're experiencing a race condition, set the PSWorkerInProcConcurrencyUpperBound app setting to `1` and instead use [language worker process level isolation](functions-app-settings.md#functions_worker_process_count) for concurrency.
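As a sketch, limiting runspaces to one is a single application setting change via the Azure CLI (app and resource group names are placeholders):

```azurecli
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings PSWorkerInProcConcurrencyUpperBound=1
```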
azure-functions Performance Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/performance-reliability.md
Assume your function could encounter an exception at any time. Design your funct
Depending on how complex your system is, you may have: involved downstream services behaving badly, networking outages, or quota limits reached, etc. All of these can affect your function at any time. You need to design your functions to be prepared for it.
-How does your code react if a failure occurs after inserting 5,000 of those items into a queue for processing? Track items in a set that youΓÇÖve completed. Otherwise, you might insert them again next time. This double-insertion can have a serious impact on your work flow, so [make your functions idempotent](functions-idempotent.md).
+How does your code react if a failure occurs after inserting 5,000 of those items into a queue for processing? Track items in a set that you've completed. Otherwise, you might insert them again next time. This double-insertion can have a serious impact on your workflow, so [make your functions idempotent](functions-idempotent.md).
If a queue item was already processed, allow your function to be a no-op. Take advantage of defensive measures already provided for components you use in the Azure Functions platform. For example, see **Handling poison queue messages** in the documentation for [Azure Storage Queue triggers and bindings](functions-bindings-storage-queue-trigger.md#poison-messages).
+For HTTP-based functions, consider [API versioning strategies](/azure/architecture/reference-architectures/serverless/web-app#api-versioning) with Azure API Management. For example, if you have to update your HTTP-based function app, deploy the new update to a separate function app and use API Management revisions or versions to direct clients to the new version or revision. Once all clients are using the version or revision and no more executions are left on the previous function app, you can deprovision the previous function app.
+
## Function organization best practices

As part of your solution, you may develop and publish multiple functions. These functions are often combined into a single function app, but they can also run in separate function apps. In Premium and dedicated (App Service) hosting plans, multiple function apps can also share the same resources by running in the same plan. How you group your functions and function apps can impact the performance, scaling, configuration, deployment, and security of your overall solution. There aren't rules that apply to every scenario, so consider the information in this section when planning and developing your functions.
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` options
## General considerations
-+ The package file must be .zip formatted. Tar and gzip formats aren't currently supported.
++ The package file must be .zip formatted. Tar and gzip formats aren't currently supported.
+ [Zip deployment](#integration-with-zip-deployment) is recommended.
+ When deploying your function app to Windows, you should set `WEBSITE_RUN_FROM_PACKAGE` to `1` and publish with zip deployment.
+ When you run from a package, the `wwwroot` folder becomes read-only and you'll receive an error when writing files to this directory. Files are also read-only in the Azure portal.
+ The maximum size for a deployment package file is currently 1 GB.
-+ You can't use local cache when running from a deployment package.
++ You can't use local cache when running from a deployment package.
+ If your project needs to use remote build, don't use the `WEBSITE_RUN_FROM_PACKAGE` app setting. Instead add the `SCM_DO_BUILD_DURING_DEPLOYMENT=true` deployment customization app setting. For Linux, also add the `ENABLE_ORYX_BUILD=true` setting. To learn more, see [Remote build](functions-deployment-technologies.md#remote-build).

### Adding the WEBSITE_RUN_FROM_PACKAGE setting
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` options
## Using WEBSITE_RUN_FROM_PACKAGE = 1
-This section provides information about how to run your function app from a local package file.
+This section provides information about how to run your function app from a local package file.
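Enabling this mode is just an application setting; a hedged sketch with the Azure CLI (placeholder names):

```azurecli
az functionapp config appsettings set \
  --name <FUNCTION_APP_NAME> \
  --resource-group <RESOURCE_GROUP_NAME> \
  --settings WEBSITE_RUN_FROM_PACKAGE=1
```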
### Considerations for deploying from an on-site package
-+ Using an on-site package is the recommended option for running from the deployment package, except on Linux hosted in a Consumption plan.
-+ [Zip deployment](#integration-with-zip-deployment) is the recommended way to upload a deployment package to your site.
-+ When not using zip deployment, make sure the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder has a file named `packagename.txt`. This file contains only the name, without any whitespace, of the package file in this folder that's currently running.
++ Using an on-site package is the recommended option for running from the deployment package, except on Linux hosted in a Consumption plan.
++ [Zip deployment](#integration-with-zip-deployment) is the recommended way to upload a deployment package to your site.
++ When not using zip deployment, make sure the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder has a file named `packagename.txt`. This file contains only the name, without any whitespace, of the package file in this folder that's currently running.

### Integration with zip deployment
-[Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
+[Zip deployment][Zip deployment for Azure Functions] is a feature of Azure App Service that lets you deploy your function app project to the `wwwroot` directory. The project is packaged as a .zip deployment file. The same APIs can be used to deploy your package to the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.
With the `WEBSITE_RUN_FROM_PACKAGE` app setting value of `1`, the zip deployment APIs copy your package to the `d:\home\data\SitePackages` (Windows) or `/home/data/SitePackages` (Linux) folder.

> [!NOTE]
-> When a deployment occurs, a restart of the function app is triggered. Before a restart, all existing function executions are allowed to complete or time out. To learn more, see [Deployment behaviors](functions-deployment-technologies.md#deployment-behaviors).
+> When a deployment occurs, a restart of the function app is triggered. Function executions currently running during the deploy are terminated. Please review [Improve the performance and reliability of Azure Functions](performance-reliability.md#write-functions-to-be-stateless) to learn how to write stateless and defensive functions.
## Using WEBSITE_RUN_FROM_PACKAGE = URL
This section provides information about how to run your function app from a pack
<a name="troubleshooting"></a> + When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended.
++ When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package.
+ The Functions runtime must have permissions to access the package URL.
++ When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL.
-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.
+ When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts).
+ When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on).
+ You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account.

### Manually uploading a package to Blob Storage
-To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package and upload it to the destination. This example deploys to a container in Blob Storage.
+To deploy a zipped package when using the URL option, you must create a .zip compressed deployment package and upload it to the destination. This example deploys to a container in Blob Storage.
1. Create a .zip package for your project using the utility of your choice.
1. In the [Azure portal](https://portal.azure.com), search for your storage account name or browse for it in storage accounts.
-
+ 1. In the storage account, select **Containers** under **Data storage**.
1. Select **+ Container** to create a new Blob Storage container in your account.
To deploy a zipped package when using the URL option, you must create a .zip com
1. After the upload completes, choose your uploaded blob file, and copy the URL. You may need to generate a SAS URL if you aren't [using an identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity)
-1. Search for your function app or browse for it in the **Function App** page.
+1. Search for your function app or browse for it in the **Function App** page.
1. In your function app, select **Configurations** under **Settings**.
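The portal upload steps above can also be scripted. This sketch assumes an existing storage account and container with placeholder names; you'd still generate a SAS URL or use a managed identity, as described earlier, so the Functions runtime can read the package.

```azurecli
# Upload the package to a private blob container (placeholder names).
az storage blob upload \
  --account-name <STORAGE_ACCOUNT_NAME> \
  --container-name function-releases \
  --name functionapp.zip \
  --file ./functionapp.zip \
  --auth-mode login
```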
azure-government Documentation Government Ase Disa Cap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-ase-disa-cap.md
The customer has deployed an ASE with an ILB and has implemented an ExpressRoute
When creating the ASE via the portal, a route table with a default route of 0.0.0.0/0 and next hop "Internet" is created. However, since DISA advertises a default route out the ExpressRoute circuit, you should either delete the User Defined Route (UDR) or remove the default route to the internet.
-You will need to create new routes in the UDR for the management addresses in order to keep the ASE healthy. For Azure Government ranges see [App Service Environment management addresses](../app-service/environment/management-addresses.md
-)
+You will need to create new routes in the UDR for the management addresses in order to keep the ASE healthy. For Azure Government ranges, see [App Service Environment management addresses](../app-service/environment/management-addresses.md).
- 23.97.29.209/32 --> Internet - 13.72.53.37/32 --> Internet
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
Here are some licensing-related resources for Azure Maps:
Here's an example of a high-level migration plan. 1. Take inventory of what Bing Maps SDKs and services your application is using and verify that Azure Maps provides alternative SDKs and services for you to migrate to.
-2. Create an Azure subscription (if you donΓÇÖt already have one) at <https://azure.com>.
+2. Create an Azure subscription (if you don't already have one) at [azure.com](https://azure.com).
3. Create an Azure Maps account ([documentation](./how-to-manage-account-keys.md)) and authentication key or Azure Active Directory ([documentation](./how-to-manage-authentication.md)). 4. Migrate your application code.
Here is a list of useful technical resources for Azure Maps.
## Migration support
-Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many Azure support options: <https://azure.microsoft.com/support/options/>
+Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many [Azure support options](https://azure.microsoft.com/support/options/).
## New terminology
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
If your data meets one of the following criteria, be sure to specify the min and
* If the data is coming from a vector tile source, often source layers for different data types are only available through a range of zoom levels. * If using a tile layer that doesn't have tiles for all zoom levels 0 through 24 and you want it to only rendering at the levels it has tiles, and not try to fill in missing tiles with tiles from other zoom levels. * If you only want to render a layer at certain zoom levels.
-All layers have a `minZoom` and `maxZoom` option where the layer will be rendered when between these zoom levels based on this logic ` maxZoom > zoom >= minZoom`.
+All layers have a `minZoom` and `maxZoom` option where the layer will be rendered when between these zoom levels based on this logic `maxZoom > zoom >= minZoom`.
**Example**
var layer = new atlas.layer.HeatMapLayer(source, null, {
}); ```
-Learn more in the [Clustering and heat maps in this document](clustering-point-data-web-sdk.md #clustering-and-the-heat-maps-layer)
+Learn more in the [Clustering and heat maps in this document](clustering-point-data-web-sdk.md#clustering-and-the-heat-maps-layer)
### Keep image resources small
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 5/25/2022 Last updated : 6/23/2022 ms:reviwer: harelbr # Troubleshooting problems in Azure Monitor metric alerts
Metric alerts are stateful by default, and therefore additional alerts are not f
- If you're creating the alert rule programmatically (for example, via [Resource Manager](./alerts-metric-create-templates.md), [PowerShell](/powershell/module/az.monitor/), [REST](/rest/api/monitor/metricalerts/createorupdate), [CLI](/cli/azure/monitor/metrics/alert)), set the *autoMitigate* property to 'False'. - If you're creating the alert rule via the Azure portal, uncheck the 'Automatically resolve alerts' option (available under the 'Alert rule details' section).
-<sup>1</sup> For stateless metric alert rules, an alert will trigger once every 5 minutes at a minimum, even if the frequency of evaluation is equal or less than 5 minutes and the condition is still being met.
+<sup>1</sup> For stateless metric alert rules, an alert will trigger once every 10 minutes at a minimum, even if the frequency of evaluation is equal or less than 5 minutes and the condition is still being met.
> [!NOTE] > Making a metric alert rule stateless prevents fired alerts from becoming resolved, so even after the condition isnΓÇÖt met anymore, the fired alerts will remain in a fired state until the 30 days retention period.
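For reference, when creating a rule programmatically with the Azure CLI, the *autoMitigate* property maps to the `--auto-mitigate` flag. A hedged sketch (the rule name, scope, and condition are placeholders):

```azurecli
az monitor metrics alert create \
  --name myStatelessAlert \
  --resource-group <RESOURCE_GROUP_NAME> \
  --scopes <TARGET_RESOURCE_ID> \
  --condition "avg Percentage CPU > 80" \
  --auto-mitigate false
```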
The table below lists the metrics that aren't supported by dynamic thresholds.
## Next steps -- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
+- For general troubleshooting information about alerts and notifications, see [Troubleshooting problems in Azure Monitor alerts](alerts-troubleshoot.md).
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Enabling monitoring on your ASP.NET Core based web applications running on [Azur
[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead.
-See the [enable monitoring section](#enable-monitoring ) below to begin setting up Application Insights with your App Service resource.
+See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
# [Linux](#tab/Linux)
See the [enable monitoring section](#enable-monitoring ) below to begin setting
[Trim self-contained deployments](/dotnet/core/deploying/trimming/trim-self-contained) is **not supported**. Use [manual instrumentation](./asp-net-core.md) via code instead.
-See the [enable monitoring section](#enable-monitoring ) below to begin setting up Application Insights with your App Service resource.
+See the [enable monitoring section](#enable-monitoring) below to begin setting up Application Insights with your App Service resource.
azure-monitor Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/correlation.md
Add the following configuration:
distributedTracingMode: 2 // DistributedTracingModes.W3C ``` > [!IMPORTANT]
-> To see all configurations required to enable correlation, see the [JavaScript correlation documentation](./javascript.md#enable-correlation).
+> To see all configurations required to enable correlation, see the [JavaScript correlation documentation](./javascript.md#enable-distributed-tracing).
## Telemetry correlation in OpenCensus Python
azure-monitor Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing.md
The Application Insights agents and/or SDKs for .NET, .NET Core, Java, Node.js,
* [.NET Core](asp-net-core.md) * [Java](./java-in-process-agent.md) * [Node.js](../app/nodejs.md)
-* [JavaScript](./javascript.md#enable-correlation)
+* [JavaScript](./javascript.md#enable-distributed-tracing)
* [Python](opencensus-python.md) With the proper Application Insights SDK installed and configured, tracing information is automatically collected for popular frameworks, libraries, and technologies by SDK dependency auto-collectors. The full list of supported technologies is available in [the Dependency auto-collection documentation](./auto-collect-dependencies.md).
azure-monitor Java 2X Micrometer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-2x-micrometer.md
Steps:
To learn more about metrics, refer to the [Micrometer documentation](https://micrometer.io/docs/).
-Other sample code on how to create different types of metrics can be found in[the official Micrometer GitHub repo](https://github.com/micrometer-metrics/micrometer/tree/master/samples/micrometer-samples-core/src/main/java/io/micrometer/core/samples).
+Other sample code on how to create different types of metrics can be found in the [official Micrometer GitHub repo](https://github.com/micrometer-metrics/micrometer/tree/master/samples/micrometer-samples-core/src/main/java/io/micrometer/core/samples).
## How to bind additional metrics collection
azure-monitor Javascript Angular Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-angular-plugin.md
export class AppModule { }
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
### Route tracking
azure-monitor Javascript Click Analytics Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-click-analytics-plugin.md
appInsights.loadAppInsights();
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
## Sample app
azure-monitor Javascript React Native Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-native-plugin.md
appInsights.loadAppInsights();
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
### PageView
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
The `AppInsightsErrorBoundary` requires two props to be passed to it, the `React
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-correlation).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
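As a rough sketch only, assuming the `@microsoft/applicationinsights-react-js` and `@microsoft/applicationinsights-web` packages with placeholder values, correlation can be turned on in the same configuration object that registers the React plugin:

```javascript
// Sketch only: React plugin registration with correlation enabled. Placeholder values.
import { ApplicationInsights, DistributedTracingModes } from '@microsoft/applicationinsights-web';
import { ReactPlugin } from '@microsoft/applicationinsights-react-js';
import { createBrowserHistory } from 'history';

const browserHistory = createBrowserHistory();
const reactPlugin = new ReactPlugin();
const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your-connection-string>',          // placeholder
    extensions: [reactPlugin],
    extensionConfig: { [reactPlugin.identifier]: { history: browserHistory } },
    enableCorsCorrelation: true,                           // correlation is off by default
    distributedTracingMode: DistributedTracingModes.W3C
  }
});
appInsights.loadAppInsights();
```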
### Route tracking
azure-monitor Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript.md
Cookie Configuration for instance-based cookie management added in version 2.6.0
By setting `autoTrackPageVisitTime: true`, the time in milliseconds a user spends on each page is tracked. On each new PageView, the duration the user spent on the *previous* page is sent as a [custom metric](../essentials/metrics-custom-overview.md) named `PageVisitTime`. This custom metric is viewable in the [Metrics Explorer](../essentials/metrics-getting-started.md) as a "log-based metric".
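A minimal sketch of this setting, assuming the `@microsoft/applicationinsights-web` npm setup with a placeholder connection string:

```javascript
// Sketch only: track how long users spend on each page. Placeholder connection string.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your-connection-string>',
    autoTrackPageVisitTime: true   // emits the PageVisitTime custom metric on each new PageView
  }
});
appInsights.loadAppInsights();
appInsights.trackPageView();       // initial PageView
```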
-## Enable Correlation
+## Enable Distributed Tracing
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
See the section below on filtering data with transformations for a summary on wh
### Multi-homing agents You should be cautious with any configuration using multi-homed agents where a single virtual machine sends data to multiple workspaces since you may be incurring charges for the same data multiple times. If you do multi-home agents, ensure that you're sending unique data to each workspace.
-You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](logs/../agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each are collecting unique data.
+You can also collect duplicate data with a single virtual machine running both the Azure Monitor agent and Log Analytics agent, even if they're both sending data to the same workspace. While the agents can coexist, each works independently without any knowledge of the other. You should continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together, unless you can ensure that each is collecting unique data.
See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to ensure that you aren't collecting duplicate data for the same machine.
There are multiple methods that you can use to limit the amount of data collecte
* **Sampling**: [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics.
-* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-correlation).
+* **Limit Ajax calls**: [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view, or disable Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](app/javascript.md#enable-distributed-tracing). See the configuration sketch after this list.
* **Disable unneeded modules**: [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required.
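The following sketch shows how the Ajax limits mentioned above might be set, assuming the `@microsoft/applicationinsights-web` npm setup; the values are illustrative placeholders.

```javascript
// Sketch only: cap or disable Ajax reporting. Placeholder values.
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

const appInsights = new ApplicationInsights({
  config: {
    connectionString: '<your-connection-string>',
    maxAjaxCallsPerView: 20        // report at most 20 Ajax calls per page view
    // disableAjaxTracking: true   // or turn Ajax collection off entirely (also disables JS correlation)
  }
});
appInsights.loadAppInsights();
```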
azure-monitor Azure Networking Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-networking-analytics.md
The following table shows data collection methods and other details about how da
![Screenshot of the Diagnostics Settings config for Application Gateway resource.](media/azure-networking-analytics/diagnostic-settings-1.png)
- [ ![Screenshot of the page for configuring Diagnostics settings.](media/azure-networking-analytics/diagnostic-settings-2.png)](media/azure-networking-analytics/application-gateway-diagnostics-2.png#lightbox)
+ [![Screenshot of the page for configuring Diagnostics settings.](media/azure-networking-analytics/diagnostic-settings-2.png)](media/azure-networking-analytics/application-gateway-diagnostics-2.png#lightbox)
5. Click the checkbox for *Send to Log Analytics*. 6. Select an existing Log Analytics workspace, or create a workspace.
Set-AzDiagnosticSetting -ResourceId $gateway.ResourceId -WorkspaceId $workspace
Application insights can be accessed via the insights tab within your Application Gateway resource.
-![Screenshot of Application Gateway insights ](media/azure-networking-analytics/azure-appgw-insights.png
-)
+![Screenshot of Application Gateway insights](media/azure-networking-analytics/azure-appgw-insights.png)
The "view detailed metrics" tab will open up the pre-populated workbook summarizing the data from your Application Gateway.
-[ ![Screenshot of Application Gateway workbook ](media/azure-networking-analytics/azure-appgw-workbook.png)](media/azure-networking-analytics/application-gateway-workbook.png#lightbox)
+[![Screenshot of Application Gateway workbook](media/azure-networking-analytics/azure-appgw-workbook.png)](media/azure-networking-analytics/application-gateway-workbook.png#lightbox)
### New capabilities with Azure Monitor Network Insights workbook
To find more information about the capabilities of the new workbook solution che
> [!NOTE] > All past data is already available within the workbook from the point diagnostic settings were originally enabled. There is no data transfer required.
-2. Access the [default insights workbook](#accessing-azure-application-gateway-analytics-via-azure-monitor-network-insights) for your Application Gateway resource. All existing insights supported by the Application Gateway analytics solution will be already present in the workbook. You can extend this by adding custom [visualizations](../visualize/workbooks-overview.md#visualizations) based on metric & log data.
+2. Access the [default insights workbook](#accessing-azure-application-gateway-analytics-via-azure-monitor-network-insights) for your Application Gateway resource. All existing insights supported by the Application Gateway analytics solution will be already present in the workbook. You can extend this by adding custom [visualizations](../visualize/workbooks-overview.md#visualizations) based on metric and log data.
3. After you are able to see all your metric and log insights, to clean up the Azure Gateway analytics solution from your workspace, you can delete the solution from the solution resource page.
-[ ![Screenshot of the delete option for Azure Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
+[![Screenshot of the delete option for Azure Application Gateway analytics solution.](media/azure-networking-analytics/azure-appgw-analytics-delete.png)](media/azure-networking-analytics/application-gateway-analytics-delete.png#lightbox)
## Troubleshooting
azure-monitor Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/authentication-authorization.md
Before beginning, make sure you have all the values required to make OAuth2 call
### Client Credentials Flow In the client credentials flow, the token is used with the ARM endpoint. A single request is made to receive a token, using the application permissions provided during the Azure AD application setup.
-The resource requested is: <https://management.azure.com/>.
+The resource requested is: `https://management.azure.com`.
You can also use this flow to request a token to `https://api.loganalytics.io`. Replace the "resource" in the example. #### Client Credentials Token URL (POST request)
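A minimal sketch of the token request; the tenant ID, client ID, and client secret are placeholders, and the example assumes the Azure AD v1 endpoint that accepts a `resource` parameter. Run it server side, never in a browser, because it holds a client secret.

```javascript
// Sketch only: client credentials token request with placeholder values.
async function getToken() {
  const tokenUrl = 'https://login.microsoftonline.com/<tenant-id>/oauth2/token';
  const body = new URLSearchParams({
    grant_type: 'client_credentials',
    client_id: '<app-client-id>',
    client_secret: '<app-client-secret>',
    resource: 'https://management.azure.com'   // or https://api.loganalytics.io
  });
  const response = await fetch(tokenUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body
  });
  const { access_token } = await response.json();
  return access_token;
}
```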
azure-monitor Batch Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/batch-queries.md
The Azure Monitor Log Analytics API supports batching queries together. Batch queries currently require Azure AD authentication. ## Request format
-To batch queries, use the API endpoint, adding $batch at the end of the URL: <https://api.loganalytics.io/v1/$batch>.
+To batch queries, use the API endpoint, adding $batch at the end of the URL: `https://api.loganalytics.io/v1/$batch`.
If no method is included, batching defaults to the GET method. On GET requests, the API ignores the body parameter of the request object.
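A minimal sketch of a batch request; the workspace IDs, bearer token, and queries are placeholders.

```javascript
// Sketch only: batching two Log Analytics queries in one $batch call. Placeholder values.
async function runBatch(token) {
  const response = await fetch('https://api.loganalytics.io/v1/$batch', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`
    },
    body: JSON.stringify({
      requests: [
        {
          id: '1',
          workspace: '<workspace-id>',
          method: 'POST',   // omit to default to GET (the body is then ignored)
          headers: { 'Content-Type': 'application/json' },
          body: { query: 'Heartbeat | take 10' }
        },
        {
          id: '2',
          workspace: '<workspace-id>',
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: { query: 'AzureActivity | summarize count() by bin(TimeGenerated, 1h)' }
        }
      ]
    })
  });
  return response.json();
}
```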
azure-monitor Cost Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md
The default pricing for Log Analytics is a Pay-As-You-Go model that's based on i
Data volume is measured as the size of the data that will be stored in GB (10^9 bytes). The data size of a single record is calculated from a string representation of the columns that are stored in the Log Analytics workspace for that record, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any custom columns added by the [custom logs API](custom-logs-overview.md), [ingestion-time transformations](ingestion-time-transformations.md), or [custom fields](custom-fields.md) that are added as data is collected and then stored in the workspace. >[!NOTE]
->The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event, often less than 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
+>The billable data volume calculation is substantially smaller than the size of the entire incoming JSON-packaged event. On average across all event types, the billed size is about 25% less than the incoming data size. This can be up to 50% for small events. It is essential to understand this calculation of billed data size when estimating costs and comparing to other pricing models.
### Excluded columns The following [standard columns](log-standard-columns.md) that are common to all tables, are excluded in the calculation of the record size. All other columns stored in Log Analytics are included in the calculation of the record size.
azure-monitor Workbooks Add Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-add-text.md
- Title: Azure Workbooks text parameters
-description: Learn about adding text parameters to your Azure workbook.
---- Previously updated : 05/30/2022---
-# Adding text to your workbook
-
-Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
-
- :::image type="content" source="media/workbooks-add-text/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
-
-Text is added through a markdown control - into which an author can add their content. An author can use the full formatting capabilities of markdown to make their documents appear just how they want it. These include different heading and font styles, hyperlinks, tables, etc. This allows authors to create rich Word- or Portal-like reports or analytic narratives. Text Steps can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
-
-**Edit mode**:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
-
-**Preview mode**:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
-
-## Add text
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-1. Use the _Add_ button below a step or at the bottom of the workbook, and choose "Add Text" to add a text control to the workbook.
-1. Enter markdown text into the editor field
-1. Use the _Text Style_ option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
-
- > [!TIP]
- > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
-
-1. Use the Preview tab to see how your content will look. While editing, the preview will show the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, with no scrollbars.
-1. Select the _Done Editing_ button to complete editing the step
-
-## Text styles
-The following text styles are available for text steps:
-
-| Style | Description |
-| | |
-| `plain` | No other formatting is applied |
-| `info` | The portal's "info" style, with a `ℹ` or similar icon and blue background |
-| `error` | The portal's "error" style, with a `❌` or similar icon and red background |
-| `success` | The portal's "success" style, with a `✔` or similar icon and green background |
-| `upsell` | The portal's "upsell" style, with a `🚀` or similar icon and purple background |
-| `warning` | The portal's "warning" style, with a `⚠` or similar icon and blue background |
--
-Instead of picking a specific style, you may also choose a text parameter as the source of the style. The parameter value must be one of the above text values. The absence of a value, or any unrecognized value will be treated as `plain` style.
-
-### info style example:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
-
-### warning style example:
- :::image type="content" source="media/workbooks-add-text/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
-
-## Next Steps
-- [Add Workbook parameters](workbooks-parameters.md)
azure-monitor Workbooks Combine Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-combine-data.md
- Title: Combine data from different sources in your Azure Workbook
-description: Learn how to combine data from different sources in your Azure Workbook.
---- Previously updated : 05/30/2022---
-# Combine data from different sources
-
-It is often necessary to bring together data from different sources that enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
-
-Workbooks allow not just the querying of different data sources, but also provides simple controls that allow you to merge or join the data to provide rich insights. The `merge` control is the way to achieve it.
-
-## Combining alerting data with Log Analytics VM performance data
-
-The example below combines alerting data with Log Analytics VM performance data to get a rich insights grid.
-
-![Screenshot of a workbook with a merge control that combines alert and log analytics data.](./media/workbooks-data-sources/merge-control.png)
-
-## Using merge control to combine Azure Resource Graph and Log Analytics data
-
-Here is a tutorial on using the merge control to combine Azure Resource Graph and Log Analytics data:
-
-[![Combining data from different sources in workbooks](https://img.youtube.com/vi/7nWP_YRzxHg/0.jpg)](https://www.youtube.com/watch?v=7nWP_YRzxHg "Video showing how to combine data from different sources in workbooks.")
-
-Workbooks support these merges:
-
-* Inner unique join
-* Full inner join
-* Full outer join
-* Left outer join
-* Right outer join
-* Left semi-join
-* Right semi-join
-* Left anti-join
-* Right anti-join
-* Union
-* Duplicate table
-
-## Next steps
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
Title: Create an Azure Workbook
+ Title: Creating an Azure Workbook
description: Learn how to create an Azure Workbook.
Last updated 05/30/2022
-# Create an Azure Workbook
+# Creating an Azure Workbook
+This article describes how to create a new workbook and how to add elements to your Azure Workbook.
-This video provides a walkthrough of creating workbooks.
+This video walks you through creating workbooks.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4B4Ap]
-## To create a new Azure Workbook
+## Create a new Azure Workbook
To create a new Azure workbook: 1. From the Azure Workbooks page, select an empty template or select **New** in the top toolbar.
-1. Combine any of these steps to include the elements you want to the workbook:
- - [Add text to your workbook](workbooks-add-text.md)
- - [Add parameters to your workbook](workbooks-parameters.md)
- - Add queries to your workbook
- - [Combine data from different sources](workbooks-combine-data.md)
- - Add Metrics to your workbook
- - Add Links to your workbook
- - Add Groups to your workbook
- - Add more configuration options to your workbook
--
-## Next steps
-- [Getting started with Azure Workbooks](workbooks-getting-started.md).-- [Azure workbooks data sources](workbooks-data-sources.md).
+1. Add any combination of these elements to your workbook:
+ - [Text](#adding-text)
+ - Parameters
+ - [Queries](#adding-queries)
+ - [Metric charts](#adding-metric-charts)
+ - [Links](#adding-links)
+ - Groups
+ - Configuration options
+
+## Adding text
+
+Workbooks allow authors to include text blocks in their workbooks. The text can be human analysis of the telemetry, information to help users interpret the data, section headings, etc.
+
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example.png" alt-text="Screenshot of adding text to a workbook.":::
+
+Text is added through a markdown control into which an author can add their content. An author can use the full formatting capabilities of markdown. These include different heading and font styles, hyperlinks, tables, etc. Markdown allows authors to create rich Word- or Portal-like reports or analytic narratives. Text can contain parameter values in the markdown text, and those parameter references will be updated as the parameters change.
+
+**Edit mode**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode.png" alt-text="Screenshot showing adding text to a workbook in edit mode.":::
+
+**Preview mode**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot showing adding text to a workbook in preview mode.":::
+
+### Add text to an Azure workbook
+1. Make sure you are in **Edit** mode. Add text by doing any one of the following:
+ - Select **Add**, and **Add text** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add text**.
+1. Enter markdown text into the editor field.
+1. Use the **Text Style** option to switch between plain markdown, and markdown wrapped with the Azure portal's standard info/warning/success/error styling.
+
+ > [!TIP]
+ > Use this [markdown cheat sheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) to see the different formatting options.
+
+1. Use the **Preview** tab to see how your content will look. The preview shows the content inside a scrollable area to limit its size, but when displayed at runtime, the markdown content will expand to fill whatever space it needs, without a scrollbar.
+1. Select **Done Editing**.
+
+### Text styles
+These text styles are available:
+
+| Style | Description |
+| | |
+| plain| No formatting is applied |
+|info| The portal's "info" style, with a `ℹ` or similar icon and blue background |
+|error| The portal's "error" style, with a `❌` or similar icon and red background |
+|success| The portal's "success" style, with a `✔` or similar icon and green background |
+|upsell| The portal's "upsell" style, with a `🚀` or similar icon and purple background |
+|warning| The portal's "warning" style, with a `⚠` or similar icon and blue background |
++
+You can also choose a text parameter as the source of the style. The parameter value must be one of the above text values. The absence of a value, or any unrecognized value will be treated as `plain` style.
+
+### Text style examples
+
+**Info style example**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-control-edit-mode-preview.png" alt-text="Screenshot of adding text to a workbook in preview mode showing info style.":::
+
+**Warning style example**:
+ :::image type="content" source="media/workbooks-create-workbook/workbooks-text-example-warning.png" alt-text="Screenshot of a text visualization in warning style.":::
+
+## Adding queries
+
+Azure Workbooks allow you to query any of the supported workbook [data sources](workbooks-data-sources.md).
+
+For example, you can query Azure Resource Health to help you view any service problems affecting your resources, or you can query Azure Monitor Metrics, which are numeric data collected at regular intervals that provide information about an aspect of a system at a particular time.
+
+### Add a query to an Azure Workbook
+
+1. Make sure you are in **Edit** mode. Add a query by doing any one of the following:
+ - Select **Add**, and **Add query** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add query**.
+1. Select the [data source](workbooks-data-sources.md) for your query. The other fields are determined based on the data source you choose.
+1. Select any other values that are required based on the data source you selected.
+1. Select the [visualization](workbooks-visualizations.md) for your workbook.
+1. In the query section, enter your query, or select from a list of sample queries by selecting **Samples**, and then edit the query to your liking.
+1. Select **Run Query**.
+1. When you are sure you have the query you want in your workbook, select **Done editing**.
+
+## Adding metric charts
+
+Most Azure resources emit metric data about state and health such as CPU utilization, storage availability, count of database transactions, failing app requests, etc. Workbooks allow the visualization of this data as time-series charts.
+
+The example below shows the number of transactions in a storage account over the prior hour. This allows the storage owner to see the transaction trend and look for anomalies in behavior.
++
+### Add a metric chart to an Azure Workbook
+1. Make sure you are in **Edit** mode. Add a metric chart by doing any one of the following:
+ - Select **Add**, and **Add metric** below an existing element, or at the bottom of the workbook.
+ - Select the ellipses (...) to the right of the **Edit** button next to one of the elements in the workbook, then select **Add** and then **Add metric**.
+1. Select a resource type (for example, Storage Account), the resources to target, the metric namespace and name, and the aggregation to use.
+1. Set other parameters if needed, such as time range, split-by, visualization, size, and color palette.
+
+Here is the edit mode version of the metric chart above:
++
+### Metric chart parameters
+
+| Parameter | Explanation | Example |
+| - |:-|:-|
+| Resource Type| The resource type to target | Storage or Virtual Machine. |
+| Resources| A set of resources to get the metrics value from | MyStorage1 |
+| Namespace | The namespace with the metric | Storage > Blob |
+| Metric| The metric to visualize | Storage > Blob > Transactions |
+| Aggregation | The aggregation function to apply to the metric | Sum, Count, Average, etc. |
+| Time Range | The time window to view the metric in | Last hour, Last 24 hours, etc. |
+| Visualization | The visualization to use | Area, Bar, Line, Scatter, Grid |
+| Split By| Optionally split the metric on a dimension | Transactions by Geo type |
+| Size | The vertical size of the control | Small, medium or large |
+| Color palette | The color palette to use in the chart. Ignored if the `Split by` parameter is used | Blue, green, red, etc. |
+
+### Metric chart examples
+**Transactions split by API name as a line chart**
+++
+**Transactions split by response type as a large bar chart**
++
+**Average latency as a scatter chart**
++
+## Adding links
+
+Authors can use link elements to link to other views, workbooks, or other items inside a workbook, or to create tabbed views within a workbook. The links can be styled as hyperlinks, buttons, and tabs.
+
+### Link styles
+You can apply styles to the link element itself as well as to individual links.
+
+**Link element styles**
++
+|Style |Sample |Notes |
+||||
+|Bullet List | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-bullet.png" alt-text="Screenshot of bullet style workbook link."::: | The default. Links appear as a bulleted list, one on each line. Use the **Text before link** and **Text after link** fields to add text before or after the link items. |
+|List |:::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-list.png" alt-text="Screenshot of list style workbook link."::: | Links appear as a list of links, with no bullets. |
+|Paragraph | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-paragraph.png" alt-text="Screenshot of paragraph style workbook link."::: |Links appear as a paragraph of links, wrapped like a paragraph of text. |
+|Navigation | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-navigation.png" alt-text="Screenshot of navigation style workbook link."::: | Links appear in a row with vertical dividers, or pipes (`|`), between each link. |
+|Tabs | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-tabs.png" alt-text="Screenshot of tabs style workbook link."::: |Links appear as tabs. Each link appears as a tab; no link styling options apply to individual links. See the [tabs](#using-tabs) section below for how to configure tabs. |
+|Toolbar | :::image type="content" source="media/workbooks-create-workbook/workbooks-link-style-toolbar.png" alt-text="Screenshot of toolbar style workbook link."::: | Links appear as an Azure portal styled toolbar, with icons and text. Each link appears as a toolbar button. See the [toolbar](#using-toolbars) section below for how to configure toolbars. |
++
+**Link styles**
+
+| Style | Description |
+|:- |:-|
+| Link | The default. The link appears as a hyperlink. URL links can only use the link style. |
+| Button (Primary) | The link appears as a "primary" button in the portal, usually a blue color. |
+| Button (Secondary) | The link appears as a "secondary" button in the portal, usually a "transparent" color: a white button in light themes and a dark gray button in dark themes. |
+
+When using buttons, if required parameters are used in Button text, Tooltip text, or Value fields, and the required parameter is unset, the button will be disabled. For example, this can be used to disable buttons when no value is selected in another parameter/control.
+
+### Link actions
+Links can use all of the link actions available in [link actions](workbooks-link-actions.md), and have two more available actions:
+
+| Action | Description |
+|:- |:-|
+|Set a parameter value | When selecting a link/button/tab, a parameter can be set to a value. Commonly tabs are configured to set a parameter to a value, which hides and shows other parts of the workbook based on that value |
+|Scroll to a step| When selecting a link, the workbook will move focus and scroll to make another step visible. This action can be used to create a "table of contents", or a "go back to the top" style experience. |
+
+### Using tabs
+
+Most of the time, tab links are combined with the **Set a parameter value** action. Here's an example showing the links step configured to create two tabs, where selecting either tab will set a **selectedTab** parameter to a different value (the example shows a third tab being edited to show the parameter name and parameter value placeholders):
+++
+You can then add other items in the workbook that are conditionally visible if the **selectedTab** parameter value is "1" by using the advanced settings:
++
+When using tabs, the first tab will be selected by default, initially setting **selectedTab** to 1, and making that step visible. Selecting the second tab will change the value of the parameter to "2", and different content will be displayed:
++
+A sample workbook with the above tabs is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-links).
+
+### Tabs limitations
+ - When using tabs, URL links are not supported. A URL link in a tab appears as a disabled tab.
+ - When using tabs, no item styling is available. Items will only be displayed as tabs, and only the tab name (link text) field will be displayed. Fields that are not used in tab style are hidden while in edit mode.
+- When using tabs, the first tab will become selected by default, invoking whatever action that tab has specified. If the first tab's action opens another view, that means as soon as the tabs are created, a view appears.
- While having tabs open other views is *supported*, it should be used sparingly, as most users won't expect selecting a tab to navigate. (Also, if other tabs are setting a parameter to a specific value, a tab that opens a view would not change that value, so the rest of the workbook content will continue to show the view/data for the previous tab.)
+
+### Using toolbars
+
+To have your links appear styled as a toolbar, use the Toolbar style. In toolbar style, the author must fill in fields for:
+- Button text, the text to display on the toolbar. Parameters may be used in this field.
+- Icon, the icon to display in the toolbar.
+- Tooltip Text, the text to display in the toolbar button's tooltip. Parameters may be used in this field.
++
+If any required parameters are used in Button text, Tooltip text, or Value fields, and the required parameter is unset, the toolbar button will be disabled. For example, this can be used to disable toolbar buttons when no value is selected in another parameter/control.
+
+A sample workbook with toolbars, globals parameters, and ARM Actions is available in [sample Azure Workbooks with links](workbooks-sample-links.md#sample-workbook-with-toolbar-links).
azure-monitor Workbooks Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-data-sources.md
Workbooks can extract data from these data sources:
- [Metrics](#metrics) - [Azure Resource Graph](#azure-resource-graph) - [Azure Resource Manager](#azure-resource-manager)
+ - [Azure Data Explorer](#azure-data-explorer)
- [JSON](#json)
+ - [Merge](#merge)
- [Custom endpoint](#custom-endpoint)
+ - [Workload health](#workload-health)
+ - [Azure resource health](#azure-resource-health)
- [Azure RBAC](#azure-rbac)-
+ - [Change Analysis (preview)](#change-analysis-preview)
## Logs Workbooks allow querying logs from the following sources:
Workbooks allow querying logs from the following sources:
Workbook authors can use KQL queries that transform the underlying resource data to select a result set that can be visualized as text, charts, or grids.
-![Screenshot of workbooks logs report interface](./media/workbooks-data-sources/logs.png)
+![Screenshot of workbooks logs report interface.](./media/workbooks-data-sources/logs.png)
Workbook authors can easily query across multiple resources creating a truly unified rich reporting experience.
Workbook authors can easily query across multiple resources creating a truly uni
Azure resources emit [metrics](../essentials/data-platform-metrics.md) that can be accessed via workbooks. Metrics can be accessed in workbooks through a specialized control that allows you to specify the target resources, the desired metrics, and their aggregation. This data can then be plotted in charts or grids.
-![Screenshot of workbook metrics charts of cpu utilization](./media/workbooks-data-sources/metrics-graph.png)
+![Screenshot of workbook metrics charts of cpu utilization.](./media/workbooks-data-sources/metrics-graph.png)
-![Screenshot of workbook metrics interface](./media/workbooks-data-sources/metrics.png)
+![Screenshot of workbook metrics interface.](./media/workbooks-data-sources/metrics.png)
## Azure Resource Graph
Workbooks support querying for resources and their metadata using Azure Resource
To make a query control use this data source, use the Query type drop-down to choose Azure Resource Graph and select the subscriptions to target. Use the Query control to add the ARG KQL-subset that selects an interesting resource subset.
-![Screenshot of Azure Resource Graph KQL query](./media/workbooks-data-sources/azure-resource-graph.png)
+![Screenshot of Azure Resource Graph KQL query.](./media/workbooks-data-sources/azure-resource-graph.png)
## Azure Resource Manager
Workbook supports Azure Resource Manager REST operations. This allows the abilit
To make a query control use this data source, use the Data source drop-down to choose Azure Resource Manager. Provide the appropriate parameters such as Http method, url path, headers, url parameters and/or body. > [!NOTE]
-> Only `GET`, `POST`, and `HEAD` operations are currently supported.
+> Only GET, POST, and HEAD operations are currently supported.
## Azure Data Explorer Workbooks now have support for querying from [Azure Data Explorer](/azure/data-explorer/) clusters with the powerful [Kusto](/azure/kusto/query/index) query language. For the **Cluster Name** field, you should add the region name following the cluster name. For example: *mycluster.westeurope*.
-![Screenshot of Kusto query window](./media/workbooks-data-sources/data-explorer.png)
+![Screenshot of Kusto query window.](./media/workbooks-data-sources/data-explorer.png)
-## Workload health
+## JSON
-Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. Workbooks allow users to use this information to create rich interactive reports.
+The JSON provider allows you to create a query result from static JSON content. It is most commonly used in Parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the Results tab and JSONPath settings to configure columns.
-To make a query control use this data source, use the **Query type** drop-down to choose Workload Health and select subscription, resource group or VM resources to target. Use the health filter drop downs to select an interesting subset of health incidents for your analytic needs.
+> [!NOTE]
+> Do not include any sensitive information in any fields (headers, parameters, body, url), since they will be visible to all of the Workbook users.
-![Screenshot of alerts query](./media/workbooks-data-sources/workload-health.png)
+This provider supports [JSONPath](workbooks-jsonpath.md).
-## Azure resource health
+## Merge
-Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports
+Merging data from different sources can enhance the insights experience. An example is augmenting active alert information with related metric data. This allows users to see not just the effect (an active alert), but also potential causes (for example, high CPU usage). The monitoring domain has numerous such correlatable data sources that are often critical to the triage and diagnostic workflow.
-To make a query control use this data source, use the **Query type** drop-down to choose Azure health and select the resources to target. Use the health filter drop downs to select an interesting subset of resource issues for your analytic needs.
+Workbooks not only allow querying different data sources, but also provide simple controls that let you merge or join the data for richer insights. The **merge** control is the way to achieve this.
-![Screenshot of alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
+### Combining alerting data with Log Analytics VM performance data
-## Change Analysis (preview)
+The example below combines alerting data with Log Analytics VM performance data to get a rich insights grid.
-To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** drop-down and choose *Change Analysis (preview)* and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop-down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
+![Screenshot of a workbook with a merge control that combines alert and log analytics data.](./media/workbooks-data-sources/merge-control.png)
-> [!div class="mx-imgBorder"]
-> ![A screenshot of a workbook with Change Analysis](./media/workbooks-data-sources/change-analysis-data-source.png)
+### Using merge control to combine Azure Resource Graph and Log Analytics data
-## JSON
+Here is a tutorial on using the merge control to combine Azure Resource Graph and Log Analytics data:
-The JSON provider allows you to create a query result from static JSON content. It is most commonly used in Parameters to create dropdown parameters of static values. Simple JSON arrays or objects will automatically be converted into grid rows and columns. For more specific behaviors, you can use the Results tab and JSONPath settings to configure columns.
+[![Combining data from different sources in workbooks](https://img.youtube.com/vi/7nWP_YRzxHg/0.jpg)](https://www.youtube.com/watch?v=7nWP_YRzxHg "Video showing how to combine data from different sources in workbooks.")
-> [!NOTE]
-> Do not include any sensitive information in any fields (`headers`, `parameters`, `body`, `url`), since they will be visible to all of the Workbook users.
+Workbooks support these merges:
-This provider supports [JSONPath](workbooks-jsonpath.md).
+* Inner unique join
+* Full inner join
+* Full outer join
+* Left outer join
+* Right outer join
+* Left semi-join
+* Right semi-join
+* Left anti-join
+* Right anti-join
+* Union
+* Duplicate table
## Custom endpoint Workbooks support getting data from any external source. If your data lives outside Azure you can bring it to Workbooks by using this data source type.
-To make a query control use this data source, use the _Data source_ drop-down to choose _Custom Endpoint_. Provide the appropriate parameters such as `Http method`, `url`, `headers`, `url parameters` and/or `body`. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) otherwise the request will fail.
+To make a query control use this data source, use the **Data source** drop-down to choose **Custom Endpoint**. Provide the appropriate parameters such as **Http method**, **url**, **headers**, **url parameters**, and/or **body**. Make sure your data source supports [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) otherwise the request will fail.
-To avoid automatically making calls to untrusted hosts when using templates, the user needs to mark the used hosts as trusted. This can be done by either clicking on the _Add as trusted_ button, or by adding it as a trusted host in Workbook settings. These settings will be saved in [browsers that support IndexDb with web workers](https://caniuse.com/#feat=indexeddb).
+To avoid automatically making calls to untrusted hosts when using templates, the user needs to mark the used hosts as trusted. This can be done by either selecting the **Add as trusted** button, or by adding it as a trusted host in Workbook settings. These settings will be saved in [browsers that support IndexDb with web workers](https://caniuse.com/#feat=indexeddb).
This provider supports [JSONPath](workbooks-jsonpath.md).
+## Workload health
+
+Azure Monitor has functionality that proactively monitors the availability and performance of Windows or Linux guest operating systems. Azure Monitor models key components and their relationships, criteria for how to measure the health of those components, and which components alert you when an unhealthy condition is detected. Workbooks allow users to use this information to create rich interactive reports.
+
+To make a query control use this data source, use the **Query type** drop-down to choose Workload Health and select subscription, resource group or VM resources to target. Use the health filter drop downs to select an interesting subset of health incidents for your analytic needs.
+
+![Screenshot of alerts query.](./media/workbooks-data-sources/workload-health.png)
+
+## Azure resource health
+
+Workbooks support getting Azure resource health and combining it with other data sources to create rich, interactive health reports.
+
+To make a query control use this data source, use the **Query type** drop-down to choose Azure health and select the resources to target. Use the health filter drop downs to select an interesting subset of resource issues for your analytic needs.
+
+![Screenshot of alerts query that shows the health filter lists.](./media/workbooks-data-sources/resource-health.png)
## Azure RBAC The Azure RBAC provider allows you to check permissions on resources. It is most commonly used in a parameter to check whether the correct RBAC permissions are set up. A use case would be to create a parameter that checks deployment permission and then notifies the user if they don't have it. Simple JSON arrays or objects will automatically be converted into grid rows and columns, or text with a 'hasPermission' column of either true or false. The permission is checked on each resource, and the per-resource results are then combined with either 'or' or 'and' to get the result. The [operations or actions](../../role-based-access-control/resource-provider-operations.md) can be a string or an array.
The Azure RBAC provider allows you to check permissions on resources. It is most
``` ["Microsoft.Resources/deployments/read","Microsoft.Resources/deployments/write","Microsoft.Resources/deployments/validate/action","Microsoft.Resources/operations/read"] ```+
+## Change Analysis (preview)
+
+To make a query control using [Application Change Analysis](../app/change-analysis.md) as the data source, use the **Data source** drop-down and choose *Change Analysis (preview)* and select a single resource. Changes for up to the last 14 days can be shown. The *Level* drop-down can be used to filter between "Important", "Normal", and "Noisy" changes, and this drop down supports workbook parameters of type [drop down](workbooks-dropdowns.md).
+
+> [!div class="mx-imgBorder"]
+> ![A screenshot of a workbook with Change Analysis.](./media/workbooks-data-sources/change-analysis-data-source.png)
+ ## Next steps - [Getting started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Workbooks Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-getting-started.md
You can access Workbooks in a few ways:
:::image type="content" source="./media/workbooks-overview/workbooks-menu.png" alt-text="Screenshot of Workbooks icon in the menu."::: -- From a **Log Analytics workspace** page, select the **Workbooks** icon at the top of the page.
+- In a **Log Analytics workspace** page, select the **Workbooks** icon at the top of the page.
:::image type="content" source="media/workbooks-overview/workbooks-log-analytics-icon.png" alt-text="Screenshot of Workbooks icon on Log analytics workspace page."::: The gallery opens. Select a saved workbook or a template from the gallery, or search for the name in the search bar.
-## Start a new workbook
-To start a new workbook, select the **Empty** template under **Quick start**, or the **New** icon in the top navigation bar. For more information on creating new workbooks, see [Create a workbook](workbooks-create-workbook.md).
## Save a workbook To save a workbook, save the report with a specific title, subscription, resource group, and location. The workbook autofills to the same settings as the Log Analytics workspace, with the same subscription and resource group; however, users may change these report settings. Workbooks are shared resources that require write access to the parent resource group to be saved.
azure-monitor Workbooks Grid Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-grid-visualizations.md
Title: Azure Monitor workbook grid visualizations description: Learn about all the Azure Monitor workbook grid visualizations. - Previously updated : 09/04/2020 Last updated : 06/22/2022+ # Grid visualizations
The example below shows a grid that combines icons, heatmaps, and spark-bars to
## Adding a log-based grid 1. Switch the workbook to edit mode by clicking on the **Edit** toolbar item.
-2. Use the **Add query** link to add a log query control to the workbook.
+2. Select **Add query** to add a log query control to the workbook.
3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target. 4. Use the Query editor to enter the KQL for your analysis (for example, VMs with memory below a threshold) 5. Set the visualization to **Grid**
The example below shows a grid that combines icons, heatmaps, and spark-bars to
| Parameter | Explanation | Example | | - |:-|:-|
-| `Query Type` | The type of query to use. | Log, Azure Resource Graph, etc. |
-| `Resource Type` | The resource type to target. | Application Insights, Log Analytics, or Azure-first |
-| `Resources` | A set of resources to get the metrics value from. | MyApp1 |
-| `Time Range` | The time window to view the log chart. | Last hour, Last 24 hours, etc. |
-| `Visualization` | The visualization to use. | Grid |
-| `Size` | The vertical size of the control. | Small, medium, large, or full |
-| `Query` | Any KQL query that returns data in the format expected by the chart visualization. | _requests \| summarize Requests = count() by name_ |
+|Query Type| The type of query to use. | Log, Azure Resource Graph, etc. |
+|Resource Type| The resource type to target. | Application Insights, Log Analytics, or Azure-first |
+|Resources| A set of resources to get the metrics value from. | MyApp1 |
+|Time Range| The time window to view the log chart. | Last hour, Last 24 hours, etc. |
+|Visualization| The visualization to use. | Grid |
+|Size| The vertical size of the control. | Small, medium, large, or full |
+|Query| Any KQL query that returns data in the format expected by the chart visualization. | _requests \| summarize Requests = count() by name_ |
## Simple Grid
Here is the same grid styled as bars:
### Styling a grid column 1. Select the **Column Setting** button on the query control toolbar.
-2. In the *Edit column settings*, select the column to style.
+2. In the **Edit column settings**, select the column to style.
3. Choose a column renderer (for example heatmap, bar, bar underneath, etc.) and related settings to style your column.
-Below is an example that styles the *Request* column as a bar:
+Below is an example that styles the **Request** column as a bar:
[![Screenshot of a log based grid with request column styled as a bar.](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png)](./media/workbooks-grid-visualizations/log-chart-grid-column-settings-start.png#lightbox)
-### Column renderers
-
-| Column Renderer | Explanation | Additional Options |
-|:- |:-|:-|
-| `Automatic` | The default - uses the most appropriate renderer based on the column type. | |
-| `Text` | Renders the column values as text. | |
-| `Right Aligned` | Similar to text except that it is right aligned. | |
-| `Date/Time` | Renders a readable date time string. | |
-| `Heatmap` | Colors the grid cells based on the value of the cell. | Color palette and min/max value used for scaling. |
-| `Bar` | Renders a bar next to the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
-| `Bar underneath` | Renders a bar near the bottom of the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
-| `Composite bar` | Renders a composite bar using the specified columns in that row. Refer [Composite Bar](workbooks-composite-bar.md) for details. | Columns with corresponding colors to render the bar and a label to display at the top of the bar. |
-| `Spark bars` | Renders a spark bar in the cell based on the values of a dynamic array in the cell. For example, the Trend column from the screenshot at the top. | Color palette and min/max value used for scaling. |
-| `Spark lines` | Renders a spark line in the cell based on the values of a dynamic array in the cell. | Color palette and min/max value used for scaling. |
-| `Icon` | Renders icons based on the text values in the cell. Supported values include: `cancelled`, `critical`, `disabled`, `error`, `failed`, `info`, `none`, `pending`, `stopped`, `question`, `success`, `unknown`, `warning` `uninitialized`, `resource`, `up`, `down`, `left`, `right`, `trendup`, `trenddown`, `4`, `3`, `2`, `1`, `Sev0`, `Sev1`, `Sev2`, `Sev3`, `Sev4`, `Fired`, `Resolved`, `Available`, `Unavailable`, `Degraded`, `Unknown`, and `Blank`.| |
-| `Link` | Renders a link that when clicked or performs a configurable action. Use this if you *only* want the item to be a link. Any of the other types can *also* be a link by using the `Make this item a link` setting. For more information see [Link Actions](#link-actions) below. | |
-| `Location` | Renders a friendly Azure region name based on a region ids. | |
-| `Resource type` | Renders a friendly resource type string based on a resource type id | |
-| `Resource` | Renders a friendly resource name and link based on a resource id | Option to show the resource type icon |
-| `Resource group` | Renders a friendly resource group name and link based on a resource group id. If the value of the cell is not a resource group, it will be converted to one. | Option to show the resource group icon |
-| `Subscription` | Renders a friendly subscription name and link based on a subscription id. if the value of the cell is not a subscription, it will be converted to one. | Option to show the subscription icon. |
-| `Hidden` | Hides the column in the grid. Useful when the default query returns more columns than needed but a project-away is not desired | |
-
-### Link actions
-
-If the `Link` renderer is selected or the *Make this item a link* checkbox is selected, then the author can configure a link action that will occur on selecting the cell. THis usually is taking the user to some other view with context coming from the cell or may open up a url.
+This usually takes the user to another view with context coming from the cell, or it may open a URL.
### Custom formatting
-Workbooks also allows users to set the number formatting of their cell values. They can do so by clicking on the *Custom formatting* checkbox when available.
+Workbooks also allow users to set the number formatting of their cell values. They can do so by clicking on the **Custom formatting** checkbox when available.
| Formatting option | Explanation | |:- |:-|
-| `Units` | The units for the column - various options for percentage, counts, time, byte, count/time, bytes/time, etc. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234 s. |
-| `Style` | The format to render it as - decimal, currency, percent. |
-| `Show group separator` | Checkbox to show group separators. Renders 1234 as 1,234 in the US. |
-| `Minimum integer digits` | Minimum number of integer digits to use (default 1). |
-| `Minimum fractional digits` | Minimum number of fractional digits to use (default 0). |
-| `Maximum fractional digits` | Maximum number of fractional digits to use. |
-| `Minimum significant digits` | Minimum number of significant digits to use (default 1). |
-| `Maximum significant digits` | Maximum number of significant digits to use. |
-| `Custom text for missing values` | When a data point does not have a value, show this custom text instead of a blank. |
+|Units| The units for the column - various options for percentage, counts, time, byte, count/time, bytes/time, etc. For example, the unit for a value of 1234 can be set to milliseconds and it's rendered as 1.234 s. |
+|Style| The format to render it as - decimal, currency, percent. |
+|Show group separator| Checkbox to show group separators. Renders 1234 as 1,234 in the US. |
+|Minimum integer digits| Minimum number of integer digits to use (default 1). |
+|Minimum fractional digits| Minimum number of fractional digits to use (default 0). |
+|Maximum fractional digits| Maximum number of fractional digits to use. |
+|Minimum significant digits| Minimum number of significant digits to use (default 1). |
+|Maximum significant digits| Maximum number of significant digits to use. |
+|Custom text for missing values| When a data point does not have a value, show this custom text instead of a blank. |
### Custom date formatting
When the author has specified that a column is set to the Date/Time renderer, th
| Formatting option | Explanation | |:- |:-|
-| `Style` | The format to render a date as short, long, full formats, or a time as short or long time formats. |
-| `Show time as` | Allows the author to decide between showing the time in local time (default), or as UTC. Depending on the date format style selected, the UTC/time zone information may not be displayed. |
+|Style| The format to render a date as short, long, full formats, or a time as short or long time formats. |
+|Show time as| Allows the author to decide between showing the time in local time (default), or as UTC. Depending on the date format style selected, the UTC/time zone information may not be displayed. |
## Custom column width setting
-The author can customize the width of any column in the grid using the *Custom Column Width* field in *Column Settings*.
+The author can customize the width of any column in the grid using the **Custom Column Width** field in **Column Settings**.
![Screenshot of column settings with the custom column width field indicated in a red box](./media/workbooks-grid-visualizations/custom-column-width-setting.png)
requests
[![Screenshot of a log based grid with a heatmap having a shared scale across columns using the query above.](./media/workbooks-grid-visualizations/log-chart-grid-icons.png)](./media/workbooks-grid-visualizations/log-chart-grid-icons.png#lightbox) Supported icon names include:
-`cancelled`, `critical`, `disabled`, `error`, `failed`, `info`, `none`, `pending`, `stopped`, `question`, `success`, `unknown`, `warning` `uninitialized`, `resource`, `up`, `down`, `left`, `right`, `trendup`, `trenddown`, `4`, `3`, `2`, `1`, `Sev0`, `Sev1`, `Sev2`, `Sev3`, `Sev4`, `Fired`, `Resolved`, `Available`, `Unavailable`, `Degraded`, `Unknown`, and `Blank`.
+- cancelled
+- critical
+- disabled
+- error
+- failed
+- info
+- none
+- pending
+- stopped
+- question
+- success
+- unknown
+- warning
+- uninitialized
+- resource
+- up
+- down
+- left
+- right
+- trendup
+- trenddown
+- 4
+- 3
+- 2
+- 1
+- Sev0
+- Sev1
+- Sev2
+- Sev3
+- Sev4
+- Fired
+- Resolved
+- Available
+- Unavailable
+- Degraded
+- Unknown
+- Blank
-### Using thresholds with links
-
-The instructions below will show you how to use thresholds with links to assign icons and open different workbooks. Each link in the grid will open up a different workbook template for that Application Insights resource.
-
-1. Switch the workbook to edit mode by selecting *Edit* toolbar item.
-2. Select **Add** then *Add query*.
-3. Change the *Data source* to "JSON" and *Visualization* to "Grid".
-4. Enter the following query.
-
-```json
-[
- { "name": "warning", "link": "Community-Workbooks/Performance/Performance Counter Analysis" },
- { "name": "info", "link": "Community-Workbooks/Performance/Performance Insights" },
- { "name": "error", "link": "Community-Workbooks/Performance/Apdex" }
-]
-```
-
-5. Run query.
-6. Select **Column Settings** to open the settings.
-7. Select "name" from *Columns*.
-8. Under *Column renderer*, choose "Thresholds".
-9. Enter and choose the following *Threshold Settings*.
-
- | Operator | Value | Icons |
- |-|||
- | == | warning | Warning |
- | == | error | Failed |
-
- ![Screenshot of Edit column settings tab with the above settings.](./media/workbooks-grid-visualizations/column-settings.png)
-
- Keep the default row as is. You may enter whatever text you like. The Text column takes a String format as an input and populates it with the column value and unit if specified. For example, if warning is the column value the text can be "{0} {1} link!", it will be displayed as "warning link!".
-10. Select the *Make this item a link* box.
- 1. Under *View to open*, choose "Workbook (Template)".
- 2. Under *Link value comes from*, choose "link".
- 3. Select the *Open link in Context Blade* box.
- 4. Choose the following settings in *Workbook Link Settings*
- 1. Under *Template Id comes from*, choose "Column".
- 2. Under *Column* choose "link".
-
- ![Screenshot of link settings with the above settings.](./media/workbooks-grid-visualizations/make-this-item-a-link.png)
-
-11. Select "link" from *Columns*. Under Settings next to *Column renderer*, select **(Hide column)**.
-1. To change the display name of the "name" column select the **Labels** tab. On the row with "name" as its *Column ID*, under *Column Label enter the name you want displayed.
-2. Select **Apply**
-
-![Screenshot of a thresholds in grid with the above settings](./media/workbooks-grid-visualizations/thresholds-workbooks-links.png)
## Fractional units percentages
The image below shows the same table, except the first column is set to 50% widt
Combining fr, %, px, and ch widths is possible and works similarly to the previous examples. The widths that are set by the static units (ch and px) are hard constants that won't change even if the window/resolution is changed. The columns set by % will take up their percentage based on the total grid width (might not be exact due to previously minimum widths). The columns set with fr will just split up the remaining grid space based on the number of fractional units they are allotted. [![Screenshot of columns in grid with assortment of different width units used](./media/workbooks-grid-visualizations/custom-column-width-fr3.png)](./media/workbooks-grid-visualizations/custom-column-width-fr3.png#lightbox)-
-## Next steps
-
-* Learn how to create a [tree in workbooks](workbooks-tree-visualizations.md).
-* Learn how to create [workbook text parameters](workbooks-text.md).
azure-monitor Workbooks Link Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-link-actions.md
Title: Azure Monitor Workbooks link actions description: How to use link actions in Azure Monitor Workbooks Previously updated : 01/07/2021 Last updated : 06/23/2022++ # Link actions
Link actions can be accessed through Workbook link steps or through column setti
| Link action | Action on click | |:- |:-|
-| `Generic Details` | Shows the row values in a property grid context view. |
-| `Cell Details` | Shows the cell value in a property grid context view. Useful when the cell contains a dynamic type with information (for example, json with request properties like location, role instance, etc.). |
-| `Url` | The value of the cell is expected to be a valid http url, and the cell will be a link that opens up that url in a new tab.|
+|Generic Details| Shows the row values in a property grid context view. |
+|Cell Details| Shows the cell value in a property grid context view. Useful when the cell contains a dynamic type with information (for example, json with request properties like location, role instance, etc.). |
+|Url| The value of the cell is expected to be a valid http url, and the cell will be a link that opens up that url in a new tab.|
## Application Insights | Link action | Action on click | |:- |:-|
-| `Custom Event Details` | Opens the Application Insights search details with the custom event ID (`itemId`) in the cell. |
-| `* Details` | Similar to Custom Event Details, except for dependencies, exceptions, page views, requests, and traces. |
-| `Custom Event User Flows` | Opens the Application Insights User Flows experience pivoted on the custom event name in the cell. |
-| `* User Flows` | Similar to Custom Event User Flows except for exceptions, page views and requests. |
-| `User Timeline` | Opens the user timeline with the user ID (`user_Id`) in the cell. |
-| `Session Timeline` | Opens the Application Insights search experience for the value in the cell (for example, search for text 'abc' where abc is the value in the cell). |
-
-`*` denotes a wildcard for the above table
+|Custom Event Details| Opens the Application Insights search details with the custom event ID ("itemId") in the cell. |
+|Dependency, Exception, Page View, Request, or Trace Details| Similar to Custom Event Details, but for the corresponding telemetry type (dependencies, exceptions, page views, requests, and traces). |
+|Custom Event User Flows| Opens the Application Insights User Flows experience pivoted on the custom event name in the cell. |
+|Exception, Page View, or Request User Flows| Similar to Custom Event User Flows, but for the corresponding telemetry type (exceptions, page views, and requests). |
+|User Timeline| Opens the user timeline with the user ID ("user_Id") in the cell. |
+|Session Timeline| Opens the Application Insights search experience for the value in the cell (for example, search for text 'abc' where abc is the value in the cell). |
## Azure resource | Link action | Action on click | |:- |:-|
-| `ARM Deployment` | Deploy an Azure Resource Manager template. When this item is selected, additional fields are displayed to let the author configure which Azure Resource Manager template to open, parameters for the template, etc. [See Azure Resource Manager deployment link settings](#azure-resource-manager-deployment-link-settings). |
-| `Create Alert Rule` | Creates an Alert rule for a resource. |
-| `Custom View` | Opens a custom View. When this item is selected, additional fields are displayed to let the author configure the View extension, View name, and any parameters used to open the View. [See custom view](#custom-view-link-settings). |
-| `Metrics` | Opens a metrics view. |
-| `Resource overview` | Open the resource's view in the portal based on the resource ID value in the cell. The author can also optionally set a `submenu` value that will open up a specific menu item in the resource view. |
-| `Workbook (template)` | Open a workbook template. When this item is selected, additional fields are displayed to let the author configure what template to open, etc. |
+|ARM Deployment| Deploy an Azure Resource Manager template. When this item is selected, additional fields are displayed to let the author configure which Azure Resource Manager template to open, parameters for the template, etc. [See Azure Resource Manager deployment link settings](#azure-resource-manager-deployment-link-settings). |
+|Create Alert Rule| Creates an Alert rule for a resource. |
+|Custom View| Opens a custom View. When this item is selected, additional fields are displayed to let the author configure the View extension, View name, and any parameters used to open the View. [See custom view](#custom-view-link-settings). |
+|Metrics| Opens a metrics view. |
+|Resource overview| Open the resource's view in the portal based on the resource ID value in the cell. The author can also optionally set a submenu value that will open up a specific menu item in the resource view. |
+|Workbook (template)| Open a workbook template. When this item is selected, additional fields are displayed to let the author configure what template to open, etc. |
## Link settings
When using the link renderer, the following settings are available:
| Setting | Explanation | |:- |:-|
-| `View to open` | Allows the author to select one of the actions enumerated above. |
-| `Menu item` | If "Resource Overview" is selected, this is the menu item in the resource's overview to open. This can be used to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure `Resourcetype`.|
-| `Link label` | If specified, this value will be displayed in the grid column. If this value is not specified, the value of the cell will be displayed. If you want another value to be displayed, like a heatmap or icon, do not use the `Link` renderer, instead use the appropriate renderer and select the `Make this item a link` option. |
-| `Open link in Context Blade` | If specified, the link will be opened as a popup "context" view on the right side of the window instead of opening as a full view. |
+|View to open| Allows the author to select one of the actions enumerated above. |
+|Menu item| If "Resource Overview" is selected, this is the menu item in the resource's overview to open. This can be used to open alerts or activity logs instead of the "overview" for the resource. Menu item values are different for each Azure Resource type.|
+|Link label| If specified, this value will be displayed in the grid column. If this value is not specified, the value of the cell will be displayed. If you want another value to be displayed, like a heatmap or icon, do not use the link renderer, instead use the appropriate renderer and select the **Make this item a link** option. |
+|Open link in Context Blade| If specified, the link will be opened as a popup "context" view on the right side of the window instead of opening as a full view. |
-When using the `Make this item a link` option, the following settings are available:
+When using the **Make this item a link** option, the following settings are available:
| Setting | Explanation | |:- |:-|
-| `Link value comes from` | When displaying a cell as a renderer with a link, this field specifies where the "link" value to be used in the link comes from, allowing the author to select from a dropdown of the other columns in the grid. For example, the cell may be a heatmap value, but you want the link to open up the Resource Overview for the resource ID in the row. In that case, you'd set the link value to come from the `Resource Id` field.
-| `View to open` | same as above. |
-| `Menu item` | same as above. |
-| `Open link in Context Blade` | same as above. |
+|Link value comes from| When displaying a cell as a renderer with a link, this field specifies where the "link" value to be used in the link comes from, allowing the author to select from a dropdown of the other columns in the grid. For example, the cell may be a heatmap value, but you want the link to open up the Resource Overview for the resource ID in the row. In that case, you'd set the link value to come from the **Resource ID** field.
+|View to open| Same as above. |
+|Menu item| Same as above. |
+|Open link in Context Blade| Same as above. |
## Azure Resource Manager deployment link settings
-If the selected link type is `ARM Deployment` the author must specify additional settings to open an Azure Resource Manager deployment. There are two main tabs for configurations.
+If the selected link type is **ARM Deployment** the author must specify additional settings to open an Azure Resource Manager deployment. There are two main tabs for configurations.
### Template settings
This section defines where the template should come from and the parameters used
| Source | Explanation | |:- |:-|
-|`Resource group id comes from` | The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value is not specified, the deployment will fail. Select from `Cell`, `Column`, `Static Value`, or `Parameter` in [link sources](#link-sources).|
-|`ARM template URI from` | The URI to the Azure Resource Manager template itself. The template URI needs to be accessible to the users who will deploy the template. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources). For starters, take a look at our [quickstart templates](https://azure.microsoft.com/resources/templates/).|
-|`ARM Template Parameters` | This section defines the template parameters used for the template URI defined above. These parameters will be used to deploy the template on the run page. The grid contains an expand toolbar button to help fill the parameters using the names defined in the template URI and set it to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select Refresh to update the preview with current changes. Parameters are typically values, whereas references are something that could point to keyvault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation** - does not render reference parameters correctly and will show up as null/value, thus users will not be able to correctly deploy reference parameters from Template Viewer tab.|
+|Resource group id comes from| The resource ID is used to manage deployed resources. The subscription is used to manage deployed resources and costs. The resource groups are used like folders to organize and manage all your resources. If this value is not specified, the deployment will fail. Select from: Cell, Column, Static Value, or Parameter in [link sources](#link-sources).|
+|ARM template URI from| The URI to the Azure Resource Manager template itself. The template URI needs to be accessible to the users who will deploy the template. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources). For starters, take a look at our [quickstart templates](https://azure.microsoft.com/resources/templates/).|
+|ARM Template Parameters|Defines the template parameters used for the template URI defined above. These parameters will be used to deploy the template on the run page. The grid contains an expand toolbar button to help fill the parameters using the names defined in the template URI and set them to static empty values. This option can only be used when there are no parameters in the grid and the template URI has been set. The lower section is a preview of what the parameter output looks like. Select **Refresh** to update the preview with current changes. Parameters are typically values, whereas references are something that could point to key vault secrets that the user has access to. <br/><br/> **Template Viewer blade limitation**: reference parameters are not rendered correctly and show up as null/value, so users will not be able to correctly deploy reference parameters from the Template Viewer tab.|
![Screenshot of Azure Resource Manager template settings](./media/workbooks-link-actions/template-settings.png)
This section configures what the users will see before they run the Azure Resour
| Source | Explanation | |:- |:-|
-|`Title from` | Title used on the run view. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources).|
-|`Description from` | This is the markdown text used to provide a helpful description to users when they want to deploy the template. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources). <br/><br/> **NOTE:** If `Static Value` is selected, a multi-line text box will appear. In this text box, you can resolve parameters using `{paramName}`. Also you can treat columns as parameters by appending `_column` after the column name like `{columnName_column}`. In the example image below, we can reference the column `VMName` by writing `{VMName_column}`. The value after the colon is the [parameter formatter](../visualize/workbooks-parameters.md#parameter-options), in this case it is `value`.|
-|`Run button text from` | Label used on the run (execute) button to deploy the Azure Resource Manager template. This is what users will select to start deploying the Azure Resource Manager template.|
+|Title from| Title used on the run view. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources).|
+|Description from| The markdown text used to provide a helpful description to users when they want to deploy the template. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources). <br/><br/> **NOTE:** If **Static Value** is selected, a multi-line text box will appear. In this text box, you can resolve parameters using "{paramName}". You can also treat columns as parameters by appending "_column" after the column name like {columnName_column}. In the example image below, we can reference the column "VMName" by writing "{VMName_column}". The value after the colon is the [parameter formatter](../visualize/workbooks-parameters.md#parameter-options), in this case it is **value**. See the sketch after this table.|
+|Run button text from| Label used on the run (execute) button to deploy the Azure Resource Manager template. This is what users will select to start deploying the Azure Resource Manager template.|
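+For illustration only, if **Static Value** is selected for **Description from**, the markdown you type might look like the sketch below. Here `VMName` is assumed to be a column in the grid and `ResourceGroup` is assumed to be a parameter in the workbook; substitute your own names.
+
+```
+### Deploy the performance template
+
+This deploys the template to **{VMName_column:value}** in resource group {ResourceGroup}.
+```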
![Screenshot of Azure Resource Manager UX settings](./media/workbooks-link-actions/ux-settings.png)
-After these configurations are set, when the users select the link, it will open up the view with the UX described in the UX settings. If the user selects `Run button text from` it will deploy an Azure Resource Manager template using the values from [template settings](#template-settings). View template will open up the template viewer tab for the user to examine the template and the parameters before deploying.
+After these configurations are set, when the user selects the link, the view opens with the UX described in the UX settings. Selecting the run button (labeled with the **Run button text from** value) deploys an Azure Resource Manager template using the values from [template settings](#template-settings). **View template** opens the template viewer tab so the user can examine the template and the parameters before deploying.
![Screenshot of run Azure Resource Manager view](./media/workbooks-link-actions/run-tab.png) ## Custom view link settings
-Use this to open Custom Views in the Azure portal. Verify all of the configuration and settings. Incorrect values will cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings via the `Form` or `URL`.
+Use this to open Custom Views in the Azure portal. Verify all of the configuration and settings. Incorrect values will cause errors in the portal or fail to open the views correctly. There are two ways to configure the settings: via the form or via a URL.
> [!NOTE] > Views with a menu cannot be opened in a context tab. If a view with a menu is configured to open in a context tab then no context tab will be shown when the link is selected.
Use this to open Custom Views in the Azure portal. Verify all of the configurati
| Source | Explanation | |:- |:-|
-|`Extension name` | The name of the extension that hosts the name of the View.|
-|`View name` | The name of the View to open.|
+|Extension name| The name of the extension that hosts the name of the View.|
+|View name| The name of the View to open.|
#### View inputs
-There are two types of inputs, grids and JSON. Use `Grid` for simple key and value tab inputs or select `JSON` to specify a nested JSON input.
+There are two types of inputs, grids and JSON. Use grid for simple key and value tab inputs or select JSON to specify a nested JSON input.
- Grid
- - `Parameter Name`: The name of the View input parameter.
- - `Parameter Comes From`: Where the value of the View parameter should come from. Select from `Cell`, `Column`, `Parameter`, or `Static Value` in [link sources](#link-sources).
+ - **Parameter Name**: The name of the View input parameter.
+ - **Parameter Comes From**: Where the value of the View parameter should come from. Select from: Cell, Column, Parameter, or Static Value in [link sources](#link-sources).
> [!NOTE]
- > If `Static Value` is selected, the parameters can be resolved using brackets link `{paramName}` in the text box. Columns can be treated as parameters columns by appending `_column` after the column name like `{columnName_column}`.
+ > If **Static Value** is selected, the parameters can be resolved using brackets like "{paramName}" in the text box. Columns can be treated as parameter columns by appending `_column` after the column name, like "{columnName_column}".
- - `Parameter Value`: depending on `Parameter Comes From`, this will be a dropdown of available parameters, columns, or a static value.
+ - **Parameter Value**: Depending on **Parameter Comes From**, this will be a dropdown of available parameters, columns, or a static value.
![Screenshot of edit column setting show Custom View settings from form.](./media/workbooks-link-actions/custom-tab-settings.png) - JSON
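As a rough sketch only: a nested JSON view input might look like the following. The input names `resourceId` and `timeContext` are placeholders (the actual names depend entirely on the View being opened), and `{Resource}` is an assumed workbook parameter.

```json
{
  "resourceId": "{Resource}",
  "timeContext": {
    "durationMs": 86400000
  }
}
```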
If the selected link type is `Workbook (Template)`, the author must specify addi
| Setting | Explanation | |:- |:-|
-| `Workbook owner Resource Id` | This is the Resource ID of the Azure Resource that "owns" the workbook. Commonly it is an Application Insights resource, or a Log Analytics Workspace. Inside of Azure Monitor, this may also be the literal string `"Azure Monitor"`. When the workbook is Saved, this is what the workbook will be linked to. |
-| `Workbook resources` | An array of Azure Resource Ids that specify the default resource used in the workbook. For example, if the template being opened shows Virtual Machine metrics, the values here would be Virtual Machine resource IDs. Many times, the owner, and resources are set to the same settings. |
-| `Template Id` | Specify the ID of the template to be opened. If this is a community template from the gallery (the most common case), prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If this is a link to a saved workbook/template, it is the full Azure resource ID of that item. |
-| `Workbook Type` | Specify the kind of workbook template to open. The most common cases use the `Default` or `Workbook` option to use the value in the current workbook. |
-| `Gallery Type` | This specifies the gallery type that will be displayed in the "Gallery" view of the template that opens. The most common cases use the `Default` or `Workbook` option to use the value in the current workbook. |
-|`Location comes from` | The location field should be specified if you are opening a specific Workbook resource. If location is not specified, finding the workbook content is much slower. If you know the location, specify it. If you do not know the location or are opening a template that with no specific location, leave this field as "Default".|
-|`Pass specific parameters to template` | Select to pass specific parameters to the template. If selected, only the specified parameters are passed to the template else all the parameters in the current workbook are passed to the template and in that case the parameter *names* must be the same in both workbooks for this parameter value to work.|
-|`Workbook Template Parameters` | This section defines the parameters that are passed to the target template. The name should match with the name of the parameter in the target template. Select value from `Cell`, `Column`, `Parameter`, and `Static Value`. Name and value must not be empty to pass that parameter to the target template.|
+|Workbook owner Resource Id| This is the Resource ID of the Azure Resource that "owns" the workbook. Commonly it is an Application Insights resource, or a Log Analytics Workspace. Inside of Azure Monitor, this may also be the literal string "Azure Monitor". When the workbook is saved, this is what the workbook will be linked to. |
+|Workbook resources| An array of Azure Resource Ids that specify the default resource used in the workbook. For example, if the template being opened shows Virtual Machine metrics, the values here would be Virtual Machine resource IDs. Many times, the owner, and resources are set to the same settings. |
+|Template Id| Specify the ID of the template to be opened. If this is a community template from the gallery (the most common case), prefix the path to the template with `Community-`, like `Community-Workbooks/Performance/Apdex` for the `Workbooks/Performance/Apdex` template. If this is a link to a saved workbook/template, it is the full Azure resource ID of that item. |
+|Workbook Type| Specify the kind of workbook template to open. The most common cases use the **Default** or **Workbook** option to use the value in the current workbook. |
+|Gallery Type| This specifies the gallery type that will be displayed in the "Gallery" view of the template that opens. The most common cases use the **Default** or **Workbook** option to use the value in the current workbook. |
+|Location comes from| The location field should be specified if you are opening a specific Workbook resource. If location is not specified, finding the workbook content is much slower. If you know the location, specify it. If you do not know the location or are opening a template with no specific location, leave this field as "Default".|
+|Pass specific parameters to template| Select to pass specific parameters to the template. If selected, only the specified parameters are passed to the template. Otherwise, all the parameters in the current workbook are passed to the template, and in that case the parameter *names* must be the same in both workbooks for this parameter value to work.|
+|Workbook Template Parameters| This section defines the parameters that are passed to the target template. The name should match with the name of the parameter in the target template. Select value from: Cell, Column, Parameter, and Static Value. Name and value must not be empty to pass that parameter to the target template.|
For each of the above settings, the author must pick where the value in the linked workbook will come from. See [link Sources](#link-sources)
When the workbook link is opened, the new workbook view will be passed all of th
| Source | Explanation | |:- |:-|
-| `Cell` | This will use the value in that cell in the grid as the link value |
-| `Column` | When selected, another field will be displayed to let the author select another column in the grid. The value of that column for the row will be used in the link value. This is commonly used to enable each row of a grid to open a different template, by setting `Template Id` field to `column`, or to open up the same workbook template for different resources, if the `Workbook resources` field is set to a column that contains an Azure Resource ID |
-| `Parameter` | When selected, another field will be displayed to let the author select a parameter. The value of that parameter will be used for the value when the link is clicked |
-| `Static value` | When selected, another field will be displayed to let the author type in a static value that will be used in the linked workbook. This is commonly used when all of the rows in the grid will use the same value for a field. |
-| `Step` | Use the value set in the current step of the workbook. This is common in query and metrics steps to set the workbook resources in the linked workbook to those used *in the query/metrics step*, not the current workbook |
-| `Workbook` | Use the value set in the current workbook. |
-| `Default` | Use the default value that would be used if no value was specified. This is common for Gallery Type, where the default gallery would be set by the type of the owner resource |
-
-## Next steps
-
+|Cell| This will use the value in that cell in the grid as the link value. |
+|Column| When selected, another field will be displayed to let the author select another column in the grid. The value of that column for the row will be used in the link value. This is commonly used to enable each row of a grid to open a different template, by setting the **Template Id** field to **column**, or to open up the same workbook template for different resources, if the **Workbook resources** field is set to a column that contains an Azure Resource ID. |
+|Parameter| When selected, another field will be displayed to let the author select a parameter. The value of that parameter will be used for the value when the link is clicked. |
+|Static value| When selected, another field will be displayed to let the author type in a static value that will be used in the linked workbook. This is commonly used when all of the rows in the grid will use the same value for a field. |
+|Step| Use the value set in the current step of the workbook. This is common in query and metrics steps to set the workbook resources in the linked workbook to those used *in the query/metrics step*, not the current workbook. |
+|Workbook| Use the value set in the current workbook. |
+|Default| Use the default value that would be used if no value was specified. This is common for Gallery Type, where the default gallery would be set by the type of the owner resource. |
azure-monitor Workbooks Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-parameters.md
Last updated 10/23/2019
-# Creating Workbook parameters
+# Workbook parameters
Parameters allow workbook authors to collect input from the consumers and reference it in other parts of the workbook, usually to scope the result set or set the right visual. It is a key capability that allows authors to build interactive reports and experiences.
format | result
> If the parameter value is not valid json, the result of the format will be an empty value. ## Parameter Style
-The following styles are available to layout the parameters in a parameters step
+The following styles are available for the parameters.
#### Pills In pills style, the default style, the parameters look like text, and require the user to select them once to go into the edit mode.
azure-monitor Workbooks Renderers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-renderers.md
+
+ Title: Azure Workbook rendering options
+description: Learn about all the Azure Monitor workbook rendering options.
++ Last updated : 06/22/2022+++
+# Rendering options
+These rendering options can be used with grids, tiles, and graphs to produce visualizations in the optimal format.
+## Column renderers
+
+| Column Renderer | Explanation | More Options |
+|:- |:-|:-|
+| Automatic | The default - uses the most appropriate renderer based on the column type. | |
+| Text| Renders the column values as text. | |
+| Right Aligned| Renders the column values as right-aligned text. | |
+| Date/Time| Renders a readable date time string. | |
+| Heatmap| Colors the grid cells based on the value of the cell. | Color palette and min/max value used for scaling. |
+| Bar| Renders a bar next to the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
+| Bar underneath | Renders a bar near the bottom of the cell based on the value of the cell. | Color palette and min/max value used for scaling. |
+| Composite bar| Renders a composite bar using the specified columns in that row. Refer [Composite Bar](workbooks-composite-bar.md) for details. | Columns with corresponding colors to render the bar and a label to display at the top of the bar. |
+|Spark bars| Renders a spark bar in the cell based on the values of a dynamic array in the cell. For example, the Trend column from the screenshot at the top. | Color palette and min/max value used for scaling. |
+|Spark lines| Renders a spark line in the cell based on the values of a dynamic array in the cell. | Color palette and min/max value used for scaling. |
+|Icon| Renders icons based on the text values in the cell. Supported values include:<br><ul><li>canceled</li><li>critical</li><li>disabled</li><li>error</li><li>failed</li> <li>info</li><li>none</li><li>pending</li><li>stopped</li><li>question</li><li>success</li><li>unknown</li><li>warning</li><li>uninitialized</li><li>resource</li><li>up</li> <li>down</li><li>left</li><li>right</li><li>trendup</li><li>trenddown</li><li>4</li><li>3</li><li>2</li><li>1</li><li>Sev0</li><li>Sev1</li><li>Sev2</li><li>Sev3</li><li>Sev4</li><li>Fired</li><li>Resolved</li><li>Available</li><li>Unavailable</li><li>Degraded</li><li>Unknown</li><li>Blank</li></ul>| |
+| Link | Renders a link that, when selected, performs a configurable action. Use this setting if you **only** want the item to be a link. Any of the other types of renderers can also be a link by using the **Make this item a link** setting. For more information, see [Link Actions](#link-actions). | |
+| Location | Renders a friendly Azure region name based on a region ID. | |
+| Resource type | Renders a friendly resource type string based on a resource type ID. | |
+| Resource| Renders a friendly resource name and link based on a resource ID. | Option to show the resource type icon |
+| Resource group | Renders a friendly resource group name and link based on a resource group ID. If the value of the cell is not a resource group, it will be converted to one. | Option to show the resource group icon |
+|Subscription| Renders a friendly subscription name and link based on a subscription ID. If the value of the cell is not a subscription, it will be converted to one. | Option to show the subscription icon. |
+|Hidden| Hides the column in the grid. Useful when the default query returns more columns than needed but a project-away is not desired. | |
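+Column renderers chosen in the UI are saved in the workbook's JSON definition as grid formatters. The fragment below is a minimal, assumed sketch: `properties` names a column from an example query, and the numeric `formatter` value is an internal renderer ID shown only for illustration.
+
+```json
+{
+  "gridSettings": {
+    "formatters": [
+      {
+        "columnMatch": "properties",
+        "formatter": 5
+      }
+    ]
+  }
+}
+```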
+
+## Link actions
+
+If the **Link** renderer is selected or the **Make this item a link** checkbox is selected, the author can configure a link action that occurs when the user selects the cell, such as taking the user to another view with context coming from the cell, or opening a URL.
+
+## Using thresholds with links
+
+The instructions below will show you how to use thresholds with links to assign icons and open different workbooks. Each link in the grid will open up a different workbook template for that Application Insights resource.
+
+1. Switch the workbook to edit mode by selecting the **Edit** toolbar item.
+1. Select **Add** then **Add query**.
+1. Change the **Data source** to "JSON" and **Visualization** to "Grid".
+1. Enter this query.
+
+ ```json
+ [
+ { "name": "warning", "link": "Community-Workbooks/Performance/Performance Counter Analysis" },
+ { "name": "info", "link": "Community-Workbooks/Performance/Performance Insights" },
+ { "name": "error", "link": "Community-Workbooks/Performance/Apdex" }
+ ]
+ ```
+
+1. Run the query.
+1. Select **Column Settings** to open the settings.
+1. Select "name" from **Columns**.
+1. Under **Column renderer**, choose "Thresholds".
+1. Enter and choose the following **Threshold Settings**.
+
+ Keep the default row as is. You may enter whatever text you like. The Text column takes a String format as an input and populates it with the column value and unit if specified. For example, if the column value is "warning" and the text is "{0} {1} link!", it will be displayed as "warning link!".
+
+ | Operator | Value | Icons |
+ |-|||
+ | == | warning | Warning |
+ | == | error | Failed |
+
+ ![Screenshot of Edit column settings tab with the above settings.](./media/workbooks-grid-visualizations/column-settings.png)
+
+1. Select the **Make this item a link** box.
+ - Under **View to open**, choose **Workbook (Template)**.
+ - Under **Link value comes from**, choose **link**.
+ - Select the **Open link in Context Blade** box.
+ - Choose the following settings in **Workbook Link Settings**
+ - Under **Template Id comes from**, choose **Column**.
+ - Under **Column** choose **link**.
+
+ ![Screenshot of link settings with the above settings.](./media/workbooks-grid-visualizations/make-this-item-a-link.png)
+
+1. Select **link** from **Columns**. Under **Settings**, next to **Column renderer**, select **(Hide column)**.
+1. To change the display name of the **name** column, select the **Labels** tab. On the row with **name** as its **Column ID**, under **Column Label** enter the name you want displayed.
+1. Select **Apply**.
+
+ ![Screenshot of a thresholds in grid with the above settings.](./media/workbooks-grid-visualizations/thresholds-workbooks-links.png)
azure-monitor Workbooks Sample Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-sample-links.md
+
+ Title: Sample Azure Workbooks with links
+description: See sample Azure Workbooks.
++++ Last updated : 05/30/2022+++
+# Sample Azure Workbooks with links
+This article includes sample Azure Workbooks.
+
+## Sample workbook with links
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "tabs",
+ "links": [
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Tab 1",
+ "subTarget": "1",
+ "style": "link"
+ },
+ {
+ "cellValue": "selectedTab",
+ "linkTarget": "parameter",
+ "linkLabel": "Tab 2",
+ "subTarget": "2",
+ "style": "link"
+ }
+ ]
+ },
+ "name": "links - 0"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "# selectedTab is {selectedTab}\r\n(this text step is always visible, but shows the value of the `selectedtab` parameter)"
+ },
+ "name": "always visible text"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## content only visible when `selectedTab == 1`"
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "1"
+ },
+ "name": "selectedTab 1 text"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## content only visible when `selectedTab == 2`"
+ },
+ "conditionalVisibility": {
+ "parameterName": "selectedTab",
+ "comparison": "isEqualTo",
+ "value": "2"
+ },
+ "name": "selectedTab 2"
+ }
+ ],
+ "styleSettings": {},
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
+
+## Sample workbook with toolbar links
+
+```json
+{
+ "version": "Notebook/1.0",
+ "items": [
+ {
+ "type": 9,
+ "content": {
+ "version": "KqlParameterItem/1.0",
+ "crossComponentResources": [
+ "value::all"
+ ],
+ "parameters": [
+ {
+ "id": "eddb3313-641d-4429-b467-4793d6ed3575",
+ "version": "KqlParameterItem/1.0",
+ "name": "Subscription",
+ "type": 6,
+ "isRequired": true,
+ "multiSelect": true,
+ "quote": "'",
+ "delimiter": ",",
+ "query": "where type =~ \"microsoft.web/sites\"\r\n| summarize count() by subscriptionId\r\n| order by subscriptionId asc\r\n| project subscriptionId, label=subscriptionId, selected=row_number()==1",
+ "crossComponentResources": [
+ "value::all"
+ ],
+ "value": [],
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ {
+ "id": "8beaeea6-9550-4574-a51e-bb0c16e68e84",
+ "version": "KqlParameterItem/1.0",
+ "name": "site",
+ "type": 5,
+ "description": "global parameter set by selection in the grid",
+ "isRequired": true,
+ "isGlobal": true,
+ "query": "where type =~ \"microsoft.web/sites\"\r\n| project id",
+ "crossComponentResources": [
+ "{Subscription}"
+ ],
+ "isHiddenWhenLocked": true,
+ "typeSettings": {
+ "additionalResourceOptions": [],
+ "showDefault": false
+ },
+ "timeContext": {
+ "durationMs": 86400000
+ },
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ {
+ "id": "0311fb5a-33ca-48bd-a8f3-d3f57037a741",
+ "version": "KqlParameterItem/1.0",
+ "name": "properties",
+ "type": 1,
+ "isRequired": true,
+ "isGlobal": true,
+ "isHiddenWhenLocked": true
+ }
+ ],
+ "style": "above",
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources"
+ },
+ "name": "parameters - 0"
+ },
+ {
+ "type": 11,
+ "content": {
+ "version": "LinkItem/1.0",
+ "style": "toolbar",
+ "links": [
+ {
+ "id": "7ea6a29e-fc83-40cb-a7c8-e57f157b1811",
+ "cellValue": "{site}",
+ "linkTarget": "ArmAction",
+ "linkLabel": "Start",
+ "postText": "Start website {site:name}",
+ "style": "primary",
+ "icon": "Start",
+ "linkIsContextBlade": true,
+ "armActionContext": {
+ "pathSource": "static",
+ "path": "{site}/start",
+ "headers": [],
+ "params": [
+ {
+ "key": "api-version",
+ "value": "2019-08-01"
+ }
+ ],
+ "isLongOperation": false,
+ "httpMethod": "POST",
+ "titleSource": "static",
+ "title": "Start {site:name}",
+ "descriptionSource": "static",
+ "description": "Attempt to start:\n\n{site:grid}",
+ "resultMessage": "Start {site:name} completed",
+ "runLabelSource": "static",
+ "runLabel": "Start"
+ }
+ },
+ {
+ "id": "676a0860-6ec8-4c4f-a3b8-a98af422ae47",
+ "cellValue": "{site}",
+ "linkTarget": "ArmAction",
+ "linkLabel": "Stop",
+ "postText": "Stop website {site:name}",
+ "style": "primary",
+ "icon": "Stop",
+ "linkIsContextBlade": true,
+ "armActionContext": {
+ "pathSource": "static",
+ "path": "{site}/stop",
+ "headers": [],
+ "params": [
+ {
+ "key": "api-version",
+ "value": "2019-08-01"
+ }
+ ],
+ "isLongOperation": false,
+ "httpMethod": "POST",
+ "titleSource": "static",
+ "title": "Stop {site:name}",
+ "descriptionSource": "static",
+ "description": "# Attempt to Stop:\n\n{site:grid}",
+ "resultMessage": "Stop {site:name} completed",
+ "runLabelSource": "static",
+ "runLabel": "Stop"
+ }
+ },
+ {
+ "id": "5e48925f-f84f-4a2d-8e69-6a4deb8a3007",
+ "cellValue": "{properties}",
+ "linkTarget": "CellDetails",
+ "linkLabel": "Properties",
+ "postText": "View the properties for Start website {site:name}",
+ "style": "secondary",
+ "icon": "Properties",
+ "linkIsContextBlade": true
+ }
+ ]
+ },
+ "name": "site toolbar"
+ },
+ {
+ "type": 3,
+ "content": {
+ "version": "KqlItem/1.0",
+ "query": "where type =~ \"microsoft.web/sites\"\r\n| extend properties=tostring(properties)\r\n| project-away name, type",
+ "size": 0,
+ "showAnalytics": true,
+ "title": "Web Apps in Subscription {Subscription:name}",
+ "showRefreshButton": true,
+ "exportedParameters": [
+ {
+ "fieldName": "id",
+ "parameterName": "site"
+ },
+ {
+ "fieldName": "",
+ "parameterName": "properties",
+ "parameterType": 1
+ }
+ ],
+ "showExportToExcel": true,
+ "queryType": 1,
+ "resourceType": "microsoft.resourcegraph/resources",
+ "crossComponentResources": [
+ "{Subscription}"
+ ],
+ "gridSettings": {
+ "formatters": [
+ {
+ "columnMatch": "properties",
+ "formatter": 5
+ }
+ ],
+ "filter": true,
+ "sortBy": [
+ {
+ "itemKey": "subscriptionId",
+ "sortOrder": 1
+ }
+ ]
+ },
+ "sortBy": [
+ {
+ "itemKey": "subscriptionId",
+ "sortOrder": 1
+ }
+ ]
+ },
+ "name": "query - 1"
+ },
+ {
+ "type": 1,
+ "content": {
+ "json": "## How this workbook works\r\n1. The parameters step declares a `site` resource parameter that is hidden in reading mode, but uses the same query as the grid. this parameter is marked `global`, and has no default selection. The parameters also declares a `properties` hidden text parameter. These parameters are [declared global](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/Parameters/Parameters.md#global-parameters) so they can be set by the grid below, but the toolbar can appear *above* the grid.\r\n\r\n2. The workbook has a links step, that renders as a toolbar. This toolbar has items to start a website, stop a website, and show the Azure Resourcve Graph properties for that website. The toolbar buttons all reference the `site` parameter, which by default has no selection, so the toolbar buttons are disabled. The start and stop buttons use the [ARM Action](https://github.com/microsoft/Application-Insights-Workbooks/blob/master/Documentation/Links/LinkActions.md#arm-action-settings) feature to run an action against the selected resource.\r\n\r\n3. The workbook has an Azure Resource Graph (ARG) query step, which queries for web sites in the selected subscriptions, and displays them in a grid. When a row is selected, the selected resource is exported to the `site` global parameter, causing start and stop buttons to become enabled. The ARG properties are also exported to the `properties` parameter, causing the poperties button to become enabled.",
+ "style": "info"
+ },
+ "conditionalVisibility": {
+ "parameterName": "debug",
+ "comparison": "isEqualTo",
+ "value": "true"
+ },
+ "name": "text - 3"
+ }
+ ],
+ "fallbackResourceIds": [
+ "Azure Monitor"
+ ],
+ "$schema": "https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json"
+}
+```
azure-monitor Workbooks Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-visualizations.md
# Workbook visualizations
-Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capability depends on the data source and result set, but authors can expect them to converge over time. These controls allow authors to present their analysis in rich, interactive reports.
+Workbooks provide a rich set of capabilities for visualizing Azure Monitor data. The exact set of capabilities depends on the data sources and result sets, but authors can expect them to converge over time. These controls allow authors to present their analysis in rich, interactive reports.
Workbooks support these kinds of visual components: * [Text parameters](#text-parameters)
Workbooks support these kinds of visual components:
* [Text visualization](#text-visualizations) > [!NOTE]
-> Each visualization and data source may have its own [Limits](workbooks-limits.md).
+> Each visualization and data source may have its own [limits](workbooks-limits.md).
## Examples
Workbooks support these kinds of visual components:
:::image type="content" source="media/workbooks-visualizations/workbooks-text-visualization-example.png" alt-text="Example screenshot of an Azure workbooks text visualization."::: ## Next steps
+ - [Getting started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
### General -- [Azure Monitor cost and usage](usage-estimated-costs.md) - Added standard web tests to table<br>Added explanation of billable GB calculation-- [Azure Monitor overview](overview.md) - Updated overview diagram
+| Article | Description |
+|:|:|
+| [Azure Monitor cost and usage](usage-estimated-costs.md) | Added standard web tests to table<br>Added explanation of billable GB calculation |
+| [Azure Monitor overview](overview.md) | Updated overview diagram |
### Agents -- [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md) - Update to latest extension version-- [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md) - Added supported resource types-- [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md) - Corrected error in data collection rule-- [Overview of the Azure monitoring agents](agents/agents-overview.md) - Added new OS supported for agent-- [Resource Manager template samples for agents](agents/resource-manager-agent.md) - Added Bicep examples-- [Resource Manager template samples for data collection rules](agents/resource-manager-data-collection-rules.md) - Fixed bug in sample parameter file-- [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) - New article-- [Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-linux-vm.md) - New article-- [Troubleshoot the Azure Monitor agent on Windows Arc-enabled server](agents/azure-monitor-agent-troubleshoot-windows-arc.md) - New article-- [Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-windows-vm.md) - New article
+| Article | Description |
+|:|:|
+| [Azure Monitor agent extension versions](agents/azure-monitor-agent-extension-versions.md) | Update to latest extension version |
+| [Azure Monitor agent overview](agents/azure-monitor-agent-overview.md) | Added supported resource types |
+| [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md) | Corrected error in data collection rule |
+| [Overview of the Azure monitoring agents](agents/agents-overview.md) | Added new OS supported for agent |
+| [Resource Manager template samples for agents](agents/resource-manager-agent.md) | Added Bicep examples |
+| [Resource Manager template samples for data collection rules](agents/resource-manager-data-collection-rules.md) | Fixed bug in sample parameter file |
+| [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) | New article |
+| [Troubleshoot the Azure Monitor agent on Linux virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-linux-vm.md) | New article |
+| [Troubleshoot the Azure Monitor agent on Windows Arc-enabled server](agents/azure-monitor-agent-troubleshoot-windows-arc.md) | New article |
+| [Troubleshoot the Azure Monitor agent on Windows virtual machines and scale sets](agents/azure-monitor-agent-troubleshoot-windows-vm.md) | New article |
### Alerts -- [IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) - Added the workflow for ITSM management and removed all references to SCSM.-- [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) - Complete rewrite-- [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) - Bicep samples for alerting have been added to the Resource Manager template samples articles.-- [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) - Added a newly supported resource type
+| Article | Description |
+|:|:|
+| [IT Service Management Connector - Secure Webhook in Azure Monitor - Azure Configurations](alerts/itsm-connector-secure-webhook-connections-azure-configuration.md) | Added the workflow for ITSM management and removed all references to SCSM. |
+| [Overview of Azure Monitor Alerts](alerts/alerts-overview.md) | Complete rewrite |
+| [Resource Manager template samples for log query alerts](alerts/resource-manager-alerts-log.md) | Bicep samples for alerting have been added to the Resource Manager template samples articles. |
+| [Supported resources for metric alerts in Azure Monitor](alerts/alerts-metric-near-real-time.md) | Added a newly supported resource type. |
### Application Insights -- [Application Map in Azure Application Insights](app/app-map.md) - Application Maps Intelligent View feature-- [Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md) - telemetry.Flush() guidance is now available.-- [Diagnose with Live Metrics Stream - Azure Application Insights](app/live-stream.md) - Updated information on using unsecure control channel.-- [Migrate an Azure Monitor Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md) - Schema change documentation is now available here.-- [Profile production apps in Azure with Application Insights Profiler](profiler/profiler-overview.md) - Profiler documentation now has a new home in the table of contents.-- All references to unsupported versions of .NET and .NET CORE have been scrubbed from Application Insights product documentation. See [.NET and >NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core)
+| Article | Description |
+|:|:|
+| [Application Map in Azure Application Insights](app/app-map.md) | Application Maps Intelligent View feature |
+| [Azure Application Insights for ASP.NET Core applications](app/asp-net-core.md) | telemetry.Flush() guidance is now available. |
+| [Diagnose with Live Metrics Stream](app/live-stream.md) | Updated information on using unsecure control channel. |
+| [Migrate an Azure Monitor Application Insights classic resource to a workspace-based resource](app/convert-classic-resource.md) | Schema change documentation is now available here. |
+| [Profile production apps in Azure with Application Insights Profiler](profiler/profiler-overview.md) | Profiler documentation now has a new home in the table of contents. |
+
+All references to unsupported versions of .NET and .NET CORE have been scrubbed from Application Insights product documentation. See [.NET and .NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
### Change Analysis -- [Navigate to a change using custom filters in Change Analysis](change/change-analysis-custom-filters.md) - New article-- [Pin and share a Change Analysis query to the Azure dashboard](change/change-analysis-query.md) - New article-- [Use Change Analysis in Azure Monitor to find web-app issues](change/change-analysis.md) - Added details enabling for web app in-guest changes
+| Article | Description |
+|:|:|
+| [Navigate to a change using custom filters in Change Analysis](change/change-analysis-custom-filters.md) | New article |
+| [Pin and share a Change Analysis query to the Azure dashboard](change/change-analysis-query.md) | New article |
+| [Use Change Analysis in Azure Monitor to find web-app issues](change/change-analysis.md) | Added details enabling for web app in-guest changes |
### Containers -- [Configure ContainerLogv2 schema (preview) for Container Insights](containers/container-insights-logging-v2.md) - New article describing new schema for container logs-- [Enable Container insights](containers/container-insights-onboard.md) - General rewrite to improve clarity-- [Resource Manager template samples for Container insights](containers/resource-manager-container-insights.md) - Added Bicep examples
+| Article | Description |
+|:|:|
+| [Configure ContainerLogv2 schema (preview) for Container Insights](containers/container-insights-logging-v2.md) | New article describing new schema for container logs |
+| [Enable Container insights](containers/container-insights-onboard.md) | General rewrite to improve clarity |
+| [Resource Manager template samples for Container insights](containers/resource-manager-container-insights.md) | Added Bicep examples |
### Insights
-- [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md) - Added known issue for OS computer name.
+| Article | Description |
+|:|:|
+| [Troubleshoot SQL Insights (preview)](insights/sql-insights-troubleshoot.md) | Added known issue for OS computer name. |
### Logs
-- [Azure Monitor customer-managed key](logs/customer-managed-keys.md) - Update limitations and constraint.
-- [Design a Log Analytics workspace architecture](logs/workspace-design.md) - Complete rewrite to better describe decision criteria and include Sentinel considerations
-- [Manage access to Log Analytics workspaces](logs/manage-access.md) - Consolidated and rewrote all content on configuring workspace access
-- [Restore logs in Azure Monitor (Preview)](logs/restore.md) - Documented new Log Analytics table management configuration UI, which lets you configure a table's log plan and archive and retention policies.
+| Article | Description |
+|:|:|
+| [Azure Monitor customer-managed key](logs/customer-managed-keys.md) | Update limitations and constraint. |
+| [Design a Log Analytics workspace architecture](logs/workspace-design.md) | Complete rewrite to better describe decision criteria and include Sentinel considerations |
+| [Manage access to Log Analytics workspaces](logs/manage-access.md) | Consolidated and rewrote all content on configuring workspace access |
+| [Restore logs in Azure Monitor (Preview)](logs/restore.md) | Documented new Log Analytics table management configuration UI, which lets you configure a table's log plan and archive and retention policies. |
### Virtual Machines
-- [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) - New article describing process to replace VM guest health with alert rules
-- [VM insights guest health (preview)](vm/vminsights-health-overview.md) - Added deprecation statement
+| Article | Description |
+|:|:|
+| [Migrate from VM insights guest health (preview) to Azure Monitor log alerts](vm/vminsights-health-migrate.md) | New article describing process to replace VM guest health with alert rules |
+| [VM insights guest health (preview)](vm/vminsights-health-overview.md) | Added deprecation statement |
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
Once these downloads are completed, then follow the steps in this guide to insta
### Verifying the download The installer has an associated PGP signature file with an `.asc` filename extension. This file can be used to ensure the installer downloaded is a verified
-Microsoft provided file. The Microsoft PGP Public Key used for signing Linux packages is available here (<https://packages.microsoft.com/keys/microsoft.asc>)
-and has been used to sign the signature file.
+Microsoft-provided file. The [Microsoft PGP Public Key used for signing Linux packages](https://packages.microsoft.com/keys/microsoft.asc) has been used to sign the signature file.
The Microsoft PGP Public Key can be imported into a user's local keyring as follows:
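For example, here's a minimal sketch of importing the key and verifying the installer against its signature file (the installer file names are illustrative only; substitute the names of the files you downloaded):

```bash
# Import the Microsoft PGP public key into the local keyring
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --import

# Verify the downloaded installer against its .asc signature file
# (file names below are placeholders for the actual downloaded files)
gpg --verify azacsnap_installer_v5.0.run.asc azacsnap_installer_v5.0.run
```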
azure-netapp-files Azacsnap Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-tips.md
Explanation of the above crontab.
config files are. - `azacsnap -c .....` = the full azacsnap command to run, including all the options.
-Further explanation of cron and the format of the crontab file here: <https://en.wikipedia.org/wiki/Cron>
+For more information about cron and the format of the crontab file, see [cron](https://wikipedia.org/wiki/Cron).
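As a purely illustrative sketch (the schedule, path, snapshot prefix, and retention values below are hypothetical and should be adapted to your environment), an hourly entry might look like this:

```bash
# Hypothetical crontab entry: take an azacsnap 'data' volume snapshot at 5 minutes past every hour,
# keeping the 9 most recent snapshots. Adjust the path, configuration, and options for your setup.
5 * * * * cd /home/azacsnap/bin && ./azacsnap -c backup --volume data --prefix hana_hourly --retention 9
```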
> [!NOTE] > Users are responsible for monitoring the cron jobs to ensure snapshots are being
A storage volume snapshot can be restored to a new volume (`-c restore --restore
A snapshot can be copied back to the SAP HANA data area, but SAP HANA must not be running when a copy is made (`cp /hana/data/H80/mnt00001/.snapshot/hana_hourly.2020-06-17T113043.1586971Z/*`).
-For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request from Azure portal: <https://portal.azure.com>
+For Azure Large Instance, you could contact the Microsoft operations team by opening a service request to restore a desired snapshot from the existing available snapshots. You can open a service request via the [Azure portal](https://portal.azure.com).
If you decide to perform the disaster recovery failover, the `azacsnap -c restore --restore revertvolume` command at the DR site will automatically make available the most recent (`/hana/data` and `/hana/logbackups`) volume snapshots to allow for an SAP HANA recovery. Use this command with caution as it breaks replication between production and DR sites.
A 'boot' snapshot can be recovered as follows:
1. The customer will need to shut down the server. 1. After the Server is shut down, the customer will need to open a service request that contains the Machine ID and Snapshot to restore.
- > Customers can open a service request from the Azure portal: <https://portal.azure.com>
+ > Customers can open a service request via the [Azure portal](https://portal.azure.com).
1. Microsoft will restore the Operating System LUN using the specified Machine ID and Snapshot, and then boot the Server. 1. The customer will then need to confirm Server is booted and healthy.
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
For information about monitoring a volumeΓÇÖs capacity, see [Monitor the capacit
## Resize the capacity pool using the Azure portal
-You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than 4 TiB. Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
+You can change the capacity pool size in 1-TiB increments or decrements. However, the capacity pool size cannot be smaller than the sum of the capacity of the volumes hosted in the pool, with a minimum of 4 TiB. Resizing the capacity pool changes the purchased Azure NetApp Files capacity.
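If you script your environment, the resize can also be done outside the portal. The following Azure CLI sketch assumes the `az netappfiles pool update` command accepts a `--size` value expressed in TiB; all resource names are placeholders:

```bash
# Hypothetical example: resize an existing capacity pool to 6 TiB
# (resource group, account, and pool names are placeholders)
az netappfiles pool update \
    --resource-group myResourceGroup \
    --account-name myNetAppAccount \
    --name myCapacityPool \
    --size 6
```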
1. From the NetApp Account view, go to **Capacity pools**, and click the capacity pool that you want to resize. 2. Right-click the capacity pool name or click the "…" icon at the end of the capacity pool row to display the context menu. Click **Resize**.
azure-netapp-files Performance Linux Nfs Read Ahead https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-nfs-read-ahead.md
To set a new value for read-ahead, run the following command:
### Example
-```
+```bash
#!/bin/bash
# set | show readahead for a specific mount point
# Useful for things like NFS and if you do not know / care about the backing device
To set a new value for read-ahead, run the following command:
# To the extent possible under law, Red Hat, Inc. has dedicated all copyright
# to this software to the public domain worldwide, pursuant to the
# CC0 Public Domain Dedication. This software is distributed without any warranty.
-# See <http://creativecommons.org/publicdomain/zero/1.0/>.
-#
+# For more information, see the CC0 1.0 Public Domain Dedication: http://creativecommons.org/publicdomain/zero/1.0/
E_BADARGS=22

function myusage() {
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
This page provides information and download links for all the dev kit OS/firmwar
## Latest releases - **Latest service release**
-May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](<https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip>)
+May Service Release (2205): [Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)
- **Latest major update or known stable version** Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download.microsoft.com/download/6/4/d/64d53e60-f702-432d-a446-007920a4612c/Azure-Percept-DK-1.0.20210409.2055.zip)
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
|Release|Download Links|Note| |||::|
-|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](<https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip>)||
-|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](<https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip>)||
-|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](<https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip>)||
-|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](<https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip>)||
-|November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](<https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip>)||
+|May Service Release (2205)|[Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip](https://download.microsoft.com/download/c/7/7/c7738a05-819c-48d9-8f30-e4bf64e19f11/Azure-Percept-DK-1.0.20220511.1756-public_preview_1.0.zip)||
+|March Service Release (2203)|[Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip](https://download.microsoft.com/download/c/6/f/c6f6b152-699e-4f60-85b7-17b3ea57c189/Azure-Percept-DK-1.0.20220310.1223-public_preview_1.0.zip)||
+|February Service Release (2202)|[Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip](https://download.microsoft.com/download/f/8/6/f86ce7b3-8d76-494e-82d9-dcfb71fc2580/Azure-Percept-DK-1.0.20220209.1156-public_preview_1.0.zip)||
+|January Service Release (2201)|[Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip](https://download.microsoft.com/download/1/6/4/164cfcf2-ce52-4e75-9dee-63bb4a128e71/Azure-Percept-DK-1.0.20220112.1519-public_preview_1.0.zip)||
+|November Service Release (2111)|[Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip](https://download.microsoft.com/download/9/5/4/95464a73-109e-46c7-8624-251ceed0c5ea/Azure-Percept-DK-1.0.20211124.1851-public_preview_1.0.zip)||
|September Service Release (2109)|[Azure-Percept-DK-1.0.20210929.1747-public_preview_1.0.zip](https://go.microsoft.com/fwlink/?linkid=2174462)|| |July Service Release (2107)|[Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip](https://download.microsoft.com/download/f/a/9/fa95d9d9-a739-493c-8fad-bccf839072c9/Azure-Percept-DK-1.0.20210729.0957-public_preview_1.0.zip)|| |June Service Release (2106)|[Azure-Percept-DK-1.0.20210611.0952-public_preview_1.0.zip](https://download.microsoft.com/download/1/5/8/1588f7e3-f8ae-4c06-baa2-c559364daae5/Azure-Percept-DK-1.0.20210611.0952-public_preview_1.0.zip)||
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
Title: Publish service catalog managed app description: Shows how to create an Azure managed application that is intended for members of your organization.-+ Previously updated : 08/16/2021- Last updated : 06/23/2022+ # Quickstart: Create and publish a managed application definition
This quickstart provides an introduction to working with [Azure Managed Applicat
To publish a managed application to your service catalog, you must:
-* Create a template that defines the resources to deploy with the managed application.
-* Define the user interface elements for the portal when deploying the managed application.
-* Create a _.zip_ package that contains the required template files.
-* Decide which user, group, or application needs access to the resource group in the user's subscription.
-* Create the managed application definition that points to the _.zip_ package and requests access for the identity.
+- Create a template that defines the resources to deploy with the managed application.
+- Define the user interface elements for the portal when deploying the managed application.
+- Create a _.zip_ package that contains the required template files.
+- Decide which user, group, or application needs access to the resource group in the user's subscription.
+- Create the managed application definition that points to the _.zip_ package and requests access for the identity.
## Create the ARM template
Add the following JSON to your file. It defines the parameters for creating a st
"resources": [ { "type": "Microsoft.Storage/storageAccounts",
- "apiVersion": "2019-06-01",
+ "apiVersion": "2021-09-01",
"name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "sku": {
Add the following JSON to your file. It defines the parameters for creating a st
"outputs": { "storageEndpoint": { "type": "string",
- "value": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2019-06-01').primaryEndpoints.blob]"
+ "value": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName')), '2021-09-01').primaryEndpoints.blob]"
} } }
$groupID=(Get-AzADGroup -DisplayName mygroup).Id
# [Azure CLI](#tab/azure-cli) ```azurecli-interactive
-groupid=$(az ad group show --group mygroup --query objectId --output tsv)
+groupid=$(az ad group show --group mygroup --query id --output tsv)
```
Next, you need the role definition ID of the Azure built-in role you want to gra
# [PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-$ownerID=(Get-AzRoleDefinition -Name Owner).Id
+$roleid=(Get-AzRoleDefinition -Name Owner).Id
``` # [Azure CLI](#tab/azure-cli) ```azurecli-interactive
-ownerid=$(az role definition list --name Owner --query [].name --output tsv)
+roleid=$(az role definition list --name Owner --query [].name --output tsv)
```
New-AzManagedApplicationDefinition `
-LockLevel ReadOnly ` -DisplayName "Managed Storage Account" ` -Description "Managed Azure Storage Account" `
- -Authorization "${groupID}:$ownerID" `
+ -Authorization "${groupID}:$roleid" `
-PackageFileUri $blob.ICloudBlob.StorageUri.PrimaryUri.AbsoluteUri ```
az managedapp definition create \
--lock-level ReadOnly \ --display-name "Managed Storage Account" \ --description "Managed Azure Storage Account" \
- --authorizations "$groupid:$ownerid" \
+ --authorizations "$groupid:$roleid" \
--package-file-uri "$blob" ```
When the command completes, you have a managed application definition in your re
Some of the parameters used in the preceding example are:
-* **resource group**: The name of the resource group where the managed application definition is created.
-* **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, ReadOnly is the only supported lock level. When ReadOnly is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
-* **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group. It's specified in the format of `<principalId>:<roleDefinitionId>`. If more than one value is needed, specify them in the form `<principalId1>:<roleDefinitionId1>,<principalId2>:<roleDefinitionId2>`. The values are separated by a comma.
-* **package file URI**: The location of a _.zip_ package that contains the required files.
+- **resource group**: The name of the resource group where the managed application definition is created.
+- **lock level**: The type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. Currently, ReadOnly is the only supported lock level. When ReadOnly is specified, the customer can only read the resources present in the managed resource group. The publisher identities that are granted access to the managed resource group are exempt from the lock.
+- **authorizations**: Describes the principal ID and the role definition ID that are used to grant permission to the managed resource group.
+
+ - **Azure PowerShell**: `"${groupid}:$roleid"` or you can use curly braces for each variable `"${groupid}:${roleid}"`. Use a comma to separate multiple values: `"${groupid1}:$roleid1", "${groupid2}:$roleid2"`.
+ - **Azure CLI**: `"$groupid:$roleid"` or you can use curly braces as shown in PowerShell. Use a space to separate multiple values: `"$groupid1:$roleid1" "$groupid2:$roleid2"`.
+
+- **package file URI**: The location of a _.zip_ package that contains the required files.
## Bring your own storage for the managed application definition
-You can choose to store your managed application definition within a storage account provided by you during creation so that its location and access can be fully managed by you for your regulatory needs.
+As an alternative, you can choose to store your managed application definition within a storage account provided by you during creation so that its location and access can be fully managed by you for your regulatory needs.
> [!NOTE] > Bring your own storage is only supported with ARM template or REST API deployments of the managed application definition.
Use the following ARM template to deploy your packaged managed application as a
} ```
-We have added a new property named `storageAccountId` to your `applicationDefinitions` properties and provide storage account ID you wish to store your definition in as its value:
-
-You can verify that the application definition files are saved in your provided storage account in a container titled `applicationDefinitions`.
+The `applicationDefinitions` properties include `storageAccountId` that contains the storage account ID for your storage account. You can verify that the application definition files are saved in your provided storage account in a container titled `applicationDefinitions`.
> [!NOTE]
-> For added security, you can create a managed applications definition store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in Service Catalog.
+> For added security, you can create a managed application definition and store it in an [Azure storage account blob where encryption is enabled](../../storage/common/storage-service-encryption.md). The definition contents are encrypted through the storage account's encryption options. Only users with permissions to the file can see the definition in Service Catalog.
## Make sure users can see your definition
azure-signalr Signalr Quickstart Azure Signalr Service Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-azure-signalr-service-arm-template.md
read -p "Press [ENTER] to continue: "
For a step-by-step tutorial that guides you through the process of creating an ARM template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
azure-signalr Signalr Quickstart Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com/> with your Azure account.
+Sign in to the [Azure portal](https://portal.azure.com) using your Azure account.
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.md) or [let us know](https://aka.ms/asrs/qsapi).
backup Backup Azure Recovery Services Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-recovery-services-vault-overview.md
Title: Overview of Recovery Services vaults
description: An overview of Recovery Services vaults. Last updated 08/17/2020
+m
++ # Recovery Services vaults overview
-This article describes the features of a Recovery Services vault. A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and Azure SQL databases. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data, while minimizing management overhead. Recovery Services vaults are based on the Azure Resource Manager model of Azure, which provides features such as:
+This article describes the features of a Recovery Services vault. A Recovery Services vault is a storage entity in Azure that houses data. The data is typically copies of data, or configuration information for virtual machines (VMs), workloads, servers, or workstations. You can use Recovery Services vaults to hold backup data for various Azure services such as IaaS VMs (Linux or Windows) and SQL Server in Azure VMs. Recovery Services vaults support System Center DPM, Windows Server, Azure Backup Server, and more. Recovery Services vaults make it easy to organize your backup data, while minimizing management overhead. Recovery Services vaults are based on the Azure Resource Manager model of Azure, which provides features such as:
- **Enhanced capabilities to help secure backup data**: With Recovery Services vaults, Azure Backup provides security capabilities to protect cloud backups. The security features ensure you can secure your backups, and safely recover data, even if production and backup servers are compromised. [Learn more](backup-azure-security-feature.md)
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
The Recovery Services vault is a Resource Manager resource, so you must place it
3. Specify the type of redundancy to use for the vault storage. * You can use [locally redundant storage](../storage/common/storage-redundancy.md#locally-redundant-storage), [geo-redundant storage](../storage/common/storage-redundancy.md#geo-redundant-storage) or [zone-redundant storage](../storage/common/storage-redundancy.md#zone-redundant-storage) .
- * The following example sets the **-BackupStorageRedundancy** option for the[Set-AzRecoveryServicesBackupProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupproperty) cmd for **testvault** set to **GeoRedundant**.
+ * The following example sets the **-BackupStorageRedundancy** option for the [Set-AzRecoveryServicesBackupProperty](/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupproperty) cmd for **testvault** set to **GeoRedundant**.
```powershell $vault1 = Get-AzRecoveryServicesVault -Name "testvault"
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
To create a Backup vault, follow these steps.
### Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>.
+Sign in to the [Azure portal](https://portal.azure.com).
### Create Backup vault
The vault move across subscriptions and resource groups is supported in all publ
:::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-inline.png" alt-text="Screenshot showing the Backup vault validation status." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-resource-group-expanded.png":::
-1. Select the checkbox _I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs_ΓÇÖ to confirm, and then select **Move**.
+1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**.
>[!Note] >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes.
Wait till the move operation is complete to perform any other operations on the
:::image type="content" source="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-inline.png" alt-text="Screenshot showing the validation status of Backup vault to be moved to another Azure subscription." lightbox="./media/backup-vault-overview/move-validation-process-to-move-to-another-subscription-expanded.png":::
-1. Select the checkbox _I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs_ to confirm, and then select **Move**.
+1. Select the checkbox **I understand that tools and scripts associated with moved resources will not work until I update them to use new resource IDs** to confirm, and then select **Move**.
>[!Note] >The resource path changes after moving vault across resource groups or subscriptions. Ensure that you update the tools and scripts with the new resource path after the move operation completes.
backup Manage Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-file-share-rest-api.md
For example: To change the protection policy of *testshare* from *schedule1* to
## Stop protection but retain existing data
-You can remove protection on a protected file share but retain the data already backed up. To do so, remove the policy in the request body you used to[enable backup](backup-azure-file-share-rest-api.md#enable-backup-for-the-file-share) and submit the request. Once the association with the policy is removed, backups are no longer triggered, and no new recovery points are created.
+You can remove protection on a protected file share but retain the data already backed up. To do so, remove the policy in the request body you used to [enable backup](backup-azure-file-share-rest-api.md#enable-backup-for-the-file-share) and submit the request. Once the association with the policy is removed, backups are no longer triggered, and no new recovery points are created.
```json {
backup Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/transport-layer-security.md
If the machine is running earlier versions of Windows, the corresponding updates
|Operating system |KB article | |||
-|Windows Server 2008 SP2 | <https://support.microsoft.com/help/4019276> |
-|Windows Server 2008 R2, Windows 7, Windows Server 2012 | <https://support.microsoft.com/help/3140245> |
+|Windows Server 2008 SP2 | <https://support.microsoft.com/help/4019276> |
+|Windows Server 2008 R2, Windows 7, Windows Server 2012 | <https://support.microsoft.com/help/3140245> |
>[!NOTE] >The update will install the required protocol components. After installation, you must make the registry key changes mentioned in the KB articles above to properly enable the required protocols.
backup Tutorial Backup Windows Server To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-backup-windows-server-to-azure.md
You can use Azure Backup to protect your Windows Server from corruptions, attack
## Sign in to Azure
-Sign in to the Azure portal at <https://portal.azure.com>.
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a Recovery Services vault
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
az batch account set \
## Next steps - Learn more about [security best practices in Azure Batch](security-best-practices.md).-- Learn more about[Azure Key Vault](../key-vault/general/basic-concepts.md).
+- Learn more about [Azure Key Vault](../key-vault/general/basic-concepts.md).
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
To run CUDA applications on a pool of Windows NC nodes, you need to install NVDI
To run CUDA applications on a pool of Linux NC nodes, you need to install necessary NVIDIA Tesla GPU drivers from the CUDA Toolkit. The following sample steps create and deploy a custom Ubuntu 16.04 LTS image with the GPU drivers:
-1. Deploy an Azure NC-series VM running Ubuntu 16.04 LTS. For example, create the VM in the US South Central region.
-2. Add the [NVIDIA GPU Drivers extension](../virtual-machines/extensions/hpccompute-gpu-linux.md
-) to the VM by using the Azure portal, a client computer that connects to the Azure subscription, or Azure Cloud Shell. Alternatively, follow the steps to connect to the VM and [install CUDA drivers](../virtual-machines/linux/n-series-driver-setup.md) manually.
+1. Deploy an Azure NC-series VM running Ubuntu 16.04 LTS. For example, create the VM in the US South Central region.
+2. Add the [NVIDIA GPU Drivers extension](../virtual-machines/extensions/hpccompute-gpu-linux.md) to the VM by using the Azure portal, a client computer that connects to the Azure subscription, or Azure Cloud Shell. Alternatively, follow the steps to connect to the VM and [install CUDA drivers](../virtual-machines/linux/n-series-driver-setup.md) manually.
3. Follow the steps to create an [Azure Compute Gallery image](batch-sig-images.md) for Batch. 4. Create a Batch account in a region that supports NC VMs. 5. Using the Batch APIs or Azure portal, create a pool [using the custom image](batch-sig-images.md) and with the desired number of nodes and scale. The following table shows sample pool settings for the image:
cloud-services Cloud Services Nodejs Chat App Socketio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-nodejs-chat-app-socketio.md
Azure emulator:
> > Reinstall AzureAuthoringTools v 2.7.1 and AzureComputeEmulator v 2.7 - make sure that version matches.
-2. Open a browser and navigate to **http://127.0.0.1**.
+2. Open a browser and navigate to `http://127.0.0.1`.
3. When the browser window opens, enter a nickname and then hit enter. This will allow you to post messages as a specific nickname. To test multi-user functionality, open additional browser windows using the
cognitive-services Get Suggested Search Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md
Familiarize yourself with the [Bing Autosuggest API v7](/rest/api/cognitiveservi
Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
-Learn how to search the web by using the [Bing Web Search API](../bing-web-search/overview.md), and explore the other[Bing Search APIs](../bing-web-search/index.yml).
+Learn how to search the web by using the [Bing Web Search API](../bing-web-search/overview.md), and explore the other [Bing Search APIs](../bing-web-search/index.yml).
Be sure to read [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) so you don't break any of the rules about using the search results.
cognitive-services Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Content-Moderator/samples-dotnet.md
ms.devlang: csharp
The following list includes links to the code samples built using the Azure Content Moderator SDK for .NET. - **Image moderation**: [Evaluate an image for adult and racy content, text, and faces](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).-- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the[.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+- **Custom images**: [Moderate with custom image lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/ImageListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
> [!NOTE] > There is a maximum limit of **5 image lists** with each list to **not exceed 10,000 images**. > -- **Text moderation**: [Screen text for profanity and personal data](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TextModeration/Program.cs). See the[.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
+- **Text moderation**: [Screen text for profanity and personal data](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TextModeration/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp).
- **Custom terms**: [Moderate with custom term lists](https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/ContentModerator/TermListManagement/Program.cs). See the [.NET SDK quickstart](./client-libraries.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp). > [!NOTE]
cognitive-services Export Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/export-programmatically.md
export = trainer.export_iteration(project_id, iteration_id, platform, flavor, ra
For more information, see the **[export_iteration](/python/api/azure-cognitiveservices-vision-customvision/azure.cognitiveservices.vision.customvision.training.operations.customvisiontrainingclientoperationsmixin#export-iteration-project-id--iteration-id--platform--flavor-none--custom-headers-none--raw-false-operation-config-)** method.
+> [!IMPORTANT]
+> If you've already exported a particular iteration, you cannot call the **export_iteration** method again. Instead, skip ahead to the **get_exports** method call to get a link to your existing exported model.
+ ## Download the exported model Next, you'll call the **get_exports** method to check the status of the export operation. The operation runs asynchronously, so you should poll this method until the operation completes. When it completes, you can retrieve the URI where you can download the model iteration to your device.
cognitive-services Get Started Build Detector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/get-started-build-detector.md
The training process should only take a few minutes. During this time, informati
## Evaluate the detector
-After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision, using a process called [k-fold cross validation](https://wikipedia.org/wiki/Cross-validation_(statistics)). Precision and recall are two different measurements of the effectiveness of a detector:
+After training has completed, the model's performance is calculated and displayed. The Custom Vision service uses the images that you submitted for training to calculate precision, recall, and mean average precision. Precision and recall are two different measurements of the effectiveness of a detector:
- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%. - **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
cognitive-services Getting Started Build A Classifier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Custom-Vision-Service/getting-started-build-a-classifier.md
The training process should only take a few minutes. During this time, informati
## Evaluate the classifier
-After training has completed, the model's performance is estimated and displayed. The Custom Vision Service uses the images that you submitted for training to calculate precision and recall, using a process called [k-fold cross validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)). Precision and recall are two different measurements of the effectiveness of a classifier:
+After training has completed, the model's performance is estimated and displayed. The Custom Vision Service uses the images that you submitted for training to calculate precision and recall. Precision and recall are two different measurements of the effectiveness of a classifier:
- **Precision** indicates the fraction of identified classifications that were correct. For example, if the model identified 100 images as dogs, and 99 of them were actually of dogs, then the precision would be 99%. - **Recall** indicates the fraction of actual classifications that were correctly identified. For example, if there were actually 100 images of apples, and the model identified 80 as apples, the recall would be 80%.
cognitive-services Luis Concept Devops Sourcecontrol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-concept-devops-sourcecontrol.md
To save a LUIS app in `.lu` format and place it under source control:
### Build the LUIS app from source
-For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version) , to [train the version](./luis-how-to-train.md) and to[publish it](./luis-how-to-publish-app.md). You can do this in the LUIS portal, or at the command line:
+For a LUIS app, to *build from source* means to [create a new LUIS app version by importing the `.lu` source](./luis-how-to-manage-versions.md#import-version), to [train the version](./luis-how-to-train.md), and to [publish it](./luis-how-to-publish-app.md). You can do this in the LUIS portal, or at the command line:
- Use the LUIS portal to [import the `.lu` version](./luis-how-to-manage-versions.md#import-version) of the app from source control, and [train](./luis-how-to-train.md) and [publish](./luis-how-to-publish-app.md) the app.
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
All Cognitive Services containers require three primary elements:
* **EULA accept setting**. An end-user license agreement (EULA) set with a value of `Eula=accept`.
-* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource _Keys and Endpoint_ page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
+* **API key** and **Endpoint URL**. The API key is used to start the container. You can retrieve the API key and Endpoint URL values by navigating to the Translator resource **Keys and Endpoint** page and selecting the `Copy to clipboard` <span class="docon docon-edit-copy x-hidden-focus"></span> icon.
> [!IMPORTANT] >
There are several ways to validate that the container is running:
#### English &leftrightarrow; German
-Navigate to the swagger page: `<http://localhost:5000/swagger/index.html>`
+Navigate to the swagger page: `http://localhost:5000/swagger/index.html`
1. Select **POST /translate** 1. Select **Try it out**
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/faq.md
files.
## I tried uploading my TMX, but it says "document processing failed" -
-Ensure that the TMX conforms to the TMX 1.4b Specification at
-<https://www.gala-global.org/tmx-14b>.
+Ensure that the TMX conforms to the [TMX 1.4b Specification](https://www.gala-global.org/tmx-14b).
cognitive-services Get Started With Document Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/get-started-with-document-translation.md
Previously updated : 02/02/2022 Last updated : 06/23/2022 recommendations: false ms.devlang: csharp, golang, java, javascript, python
The following headers are included with each Document Translator API request:
* The POST request URL is POST `https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.com/translator/text/batch/v1.0/batches` * The POST request body is a JSON object named `inputs`.
-* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs
-* The `prefix` and `suffix` fields (optional) are used to filter documents in the container including folders.
+* The `inputs` object contains both `sourceURL` and `targetURL` container addresses for your source and target language pairs.
+* The `prefix` and `suffix` fields are case-sensitive strings used to filter documents in the source path for translation. The `prefix` field is often used to delineate subfolders for translation. The `suffix` field is most often used for file extensions.
* A value for the `glossaries` field (optional) is applied when the document is being translated. * The `targetUrl` for each target language must be unique.
The following headers are included with each Document Translator API request:
{ "source": { "sourceUrl": "https://myblob.blob.core.windows.net/source",
- "filter": {
- "prefix": "myfolder/"
- }
- },
+ },
"targets": [ { "targetUrl": "https://myblob.blob.core.windows.net/target",
Operation-Location | https://<<span>NAME-OF-YOUR-RESOURCE>.cognitiveservices.a
private static readonly string key = "<YOUR-KEY>";
- static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\",\"filter\":{\"prefix\": \"Demo_1/\"} }, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
+ static readonly string json = ("{\"inputs\": [{\"source\": {\"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"language\": \"en\"}, \"targets\": [{\"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\"storageSource\": \"AzureBlob\",\"category\": \"general\",\"language\": \"es\"}]}]}");
static async Task Main(string[] args) {
let data = JSON.stringify({"inputs": [
"source": { "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS", "storageSource": "AzureBlob",
- "language": "en",
- "filter":{
- "prefix": "Demo_1/"
+ "language": "en"
} }, "targets": [
payload= {
"source": { "sourceUrl": "https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS", "storageSource": "AzureBlob",
- "language": "en",
- "filter":{
- "prefix": "Demo_1/"
+ "language": "en"
} }, "targets": [
public class DocumentTranslation {
public void post() throws IOException { MediaType mediaType = MediaType.parse("application/json");
- RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n \"filter\": {\n \"prefix\": \"Demo_1\"\n },\n \"language\": \"en\",\n \"storageSource\": \"AzureBlob\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n\"language\": \"fr\",\n\"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
+ RequestBody body = RequestBody.create(mediaType, "{\n \"inputs\": [\n {\n \"source\": {\n \"sourceUrl\": \"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS\",\n },\n \"language\": \"en\",\n \"storageSource\": \"AzureBlob\"\n },\n \"targets\": [\n {\n \"targetUrl\": \"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS\",\n \"category\": \"general\",\n\"language\": \"fr\",\n\"storageSource\": \"AzureBlob\"\n }\n ],\n \"storageType\": \"Folder\"\n }\n ]\n}");
Request request = new Request.Builder() .url(path).post(body) .addHeader("Ocp-Apim-Subscription-Key", key)
key := "<YOUR-KEY>"
uri := endpoint + "/batches" method := "POST"
-var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en","filter":{"prefix":"Demo_1/"}},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
+var jsonStr = []byte(`{"inputs":[{"source":{"sourceUrl":"https://YOUR-SOURCE-URL-WITH-READ-LIST-ACCESS-SAS","storageSource":"AzureBlob","language":"en"},"targets":[{"targetUrl":"https://YOUR-TARGET-URL-WITH-WRITE-LIST-ACCESS-SAS","storageSource":"AzureBlob","category":"general","language":"es"}]}]}`)
req, err := http.NewRequest(method, endpoint, bytes.NewBuffer(jsonStr)) req.Header.Add("Ocp-Apim-Subscription-Key", key)
cognitive-services Previous Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/previous-updates.md
+
+ Title: Previous language service updates
+
+description: An archive of previous Azure Cognitive Service for Language updates.
++++++ Last updated : 06/23/2022++++
+# Previous updates for Azure Cognitive Service for Language
+
+This article contains a list of previously recorded updates for Azure Cognitive Service for Language. For more current service updates, see [What's new](../whats-new.md).
+
+## October 2021
+
+* Quality improvements for the extractive summarization feature in model-version `2021-08-01`.
+
+## September 2021
+
+* Starting with version `3.0.017010001-onprem-amd64`, the Text Analytics for health container can now be called using the client library.
+
+## July 2021
+
+* General availability for text analytics for health containers and API.
+* General availability for opinion mining.
+* General availability for PII extraction and redaction.
+* General availability for asynchronous operation.
+
+## June 2021
+
+### General API updates
+
+* New model-version `2021-06-01` for key phrase extraction based on transformers. It provides:
+ * Support for 10 languages (Latin and CJK).
+ * Improved key phrase extraction.
+* The `2021-06-01` model version for Named Entity Recognition (NER), which provides:
+  * Improved AI quality and expanded language support for the *Skill* entity category.
+  * Added Spanish, French, German, Italian, and Portuguese language support for the *Skill* entity category.
+
+### Text Analytics for health updates
+
+* A new model version `2021-05-15` for the `/health` endpoint and on-premises container, which provides:
+  * 5 new entity types: `ALLERGEN`, `CONDITION_SCALE`, `COURSE`, `EXPRESSION`, and `MUTATION_TYPE`.
+  * 14 new relation types.
+  * Assertion detection expanded for new entity types.
+  * Linking support for the `ALLERGEN` entity type.
+* A new image for the Text Analytics for health container with tag `3.0.016230002-onprem-amd64` and model version `2021-05-15`. This container is available for download from Microsoft Container Registry.
+
+## May 2021
+
+* Custom question answering (previously QnA maker) can now be accessed using a Text Analytics resource.
+* Preview API release, including:
+ * Asynchronous API now supports sentiment analysis and opinion mining.
+ * A new query parameter, `LoggingOptOut`, is now available for customers who wish to opt out of logging input text for incident reports.
+* Text analytics for health and asynchronous operations are now available in all regions.
+
+## March 2021
+
+* Changes in the opinion mining JSON response body:
+ * `aspects` is now `targets` and `opinions` is now `assessments`.
+* Changes in the JSON response body of the hosted web API of text analytics for health:
+ * The `isNegated` boolean name of a detected entity object for negation is deprecated and replaced by assertion detection.
+ * A new property called `role` is now part of the extracted relation between an attribute and an entity as well as the relation between entities. This adds specificity to the detected relation type.
+* Entity linking is now available as an asynchronous task.
+* A new `pii-categories` parameter for the PII feature.
+ * This parameter lets you specify select PII entities, as well as those not supported by default for the input language.
+* Updated client libraries, which include asynchronous and text analytics for health operations.
+
+* A new model version `2021-03-01` for the Text Analytics for health API and on-premises container, which provides:
+ * A rename of the `Gene` entity type to `GeneOrProtein`.
+ * A new `Date` entity type.
+ * Assertion detection which replaces negation detection.
+ * A new preferred `name` property for linked entities that is normalized from various ontologies and coding systems.
+* A new text analytics for health container image with tag `3.0.015490002-onprem-amd64` and the new model-version `2021-03-01` has been released to the container preview repository.
+ * This container image will no longer be available for download from `containerpreview.azurecr.io` after April 26th, 2021.
+* **Processed Text Records** is now available as a metric in the **Monitoring** section for your text analytics resource in the Azure portal.
+
+## February 2021
+
+* The `2021-01-15` model version for the PII feature, which provides:
+ * Expanded support for 9 new languages
+ * Improved AI quality
+* The S0 through S4 pricing tiers are being retired on March 8th, 2021.
+* The language detection container is now generally available.
+
+## January 2021
+
+* The `2021-01-15` model version for Named Entity Recognition (NER), which provides
+ * Expanded language support.
+ * Improved AI quality of general entity categories for all supported languages.
+* The `2021-01-05` model version for language detection, which provides additional language support.
+
+## November 2020
+
+* Portuguese (Brazil) `pt-BR` is now supported in sentiment analysis, starting with model version `2020-04-01`. It adds to the existing `pt-PT` support for Portuguese.
+* Updated client libraries, which include asynchronous and text analytics for health operations.
+
+## October 2020
+
+* Hindi support for sentiment analysis, starting with model version `2020-04-01`.
+* Model version `2020-09-01` for language detection, which adds additional language support and accuracy improvements.
+
+## September 2020
+
+* PII now includes the new `redactedText` property in the response JSON where detected PII entities in the input text are replaced by an `*` for each character of those entities.
+* Entity linking endpoint now includes the `bingID` property in the response JSON for linked entities.
+* The following updates are specific to the September release of the text analytics for health container only.
+ * A new container image with tag `1.1.013530001-amd64-preview` with the new model-version `2020-09-03` has been released to the container preview repository.
+ * This model version provides improvements in entity recognition, abbreviation detection, and latency enhancements.
+
+## August 2020
+
+* Model version `2020-07-01` for key phrase extraction, PII detection, and language detection. This update adds:
+ * Additional government and country specific entity categories for Named Entity Recognition.
+ * Norwegian and Turkish support in Sentiment Analysis.
+* An HTTP 400 error will now be returned for API requests that exceed the published data limits.
+* Endpoints that return an offset now support the optional `stringIndexType` parameter, which adjusts the returned `offset` and `length` values to match a supported string index scheme.
+
+The following updates are specific to the August release of the Text Analytics for health container only.
+
+* New model-version for Text Analytics for health: `2020-07-24`
+
+The following properties in the JSON response have changed:
+
+* `type` has been renamed to `category`
+* `score` has been renamed to `confidenceScore`
+* Entities in the `category` field of the JSON output are now in pascal case. The following entities have been renamed:
+ * `EXAMINATION_RELATION` has been renamed to `RelationalOperator`.
+ * `EXAMINATION_UNIT` has been renamed to `MeasurementUnit`.
+ * `EXAMINATION_VALUE` has been renamed to `MeasurementValue`.
+ * `ROUTE_OR_MODE` has been renamed `MedicationRoute`.
+ * The relational entity `ROUTE_OR_MODE_OF_MEDICATION` has been renamed to `RouteOfMedication`.
+
+The following entities have been added:
+
+* Named Entity Recognition
+ * `AdministrativeEvent`
+ * `CareEnvironment`
+ * `HealthcareProfession`
+ * `MedicationForm`
+
+* Relation extraction
+ * `DirectionOfCondition`
+ * `DirectionOfExamination`
+ * `DirectionOfTreatment`
+
+## May 2020
+
+* Model version `2020-04-01`:
+ * Updated language support for sentiment analysis
+ * New "Address" entity category in Named Entity Recognition (NER)
+ * New subcategories in NER:
+ * Location - Geographical
+ * Location - Structural
+ * Organization - Stock Exchange
+ * Organization - Medical
+ * Organization - Sports
+ * Event - Cultural
+ * Event - Natural
+ * Event - Sports
+
+* The following properties in the JSON response have been added:
+ * `SentenceText` in sentiment analysis
+ * `Warnings` for each document
+
+* The names of the following properties in the JSON response have been changed, where applicable:
+  * `score` has been renamed to `confidenceScore`
+    * `confidenceScore` has two decimal points of precision.
+  * `type` has been renamed to `category`
+  * `subtype` has been renamed to `subcategory`
+
+* New sentiment analysis feature - opinion mining
+* New personal (`PII`) domain filter for protected health information (`PHI`).
+
+## February 2020
+
+Additional entity types are now available in the Named Entity Recognition (NER). This update introduces model version `2020-02-01`, which includes:
+
+* Recognition of the following general entity types (English only):
+ * PersonType
+ * Product
+ * Event
+ * Geopolitical Entity (GPE) as a subtype under Location
+ * Skill
+
+* Recognition of the following personal information entity types (English only):
+ * Person
+ * Organization
+ * Age as a subtype under Quantity
+ * Date as a subtype under DateTime
+ * Email
+ * Phone Number (US only)
+ * URL
+ * IP Address
+
+## October 2019
+
+* Introduction of PII feature
+* Model version `2019-10-01`, which includes:
+ * Named entity recognition:
+ * Expanded detection and categorization of entities found in text.
+ * Recognition of the following new entity types:
+ * Phone number
+ * IP address
+ * Sentiment analysis:
+ * Significant improvements in the accuracy and detail of the API's text categorization and scoring.
+ * Automatic labeling for different sentiments in text.
+ * Sentiment analysis and output on a document and sentence level.
+
+ This model version supports: English (`en`), Japanese (`ja`), Chinese Simplified (`zh-Hans`), Chinese Traditional (`zh-Hant`), French (`fr`), Italian (`it`), Spanish (`es`), Dutch (`nl`), Portuguese (`pt`), and German (`de`).
+
+## Next steps
+
+See [What's new](../whats-new.md) for current service updates.
cognitive-services Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/concepts/data-formats.md
CLU offers the option to upload your utterance directly to the project rather th
"intent": "{intent}", "entities": [ {
- "entityName": "{entity}",
+ "category": "{entity}",
"offset": 19, "length": 10 }
CLU offers the option to upload your utterance directly to the project rather th
"intent": "{intent}", "entities": [ {
- "entityName": "{entity}",
+ "category": "{entity}",
"offset": 20, "length": 10 }, {
- "entityName": "{entity}",
+ "category": "{entity}",
"offset": 31, "length": 5 }
cognitive-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-text-classification/glossary.md
Use this article to learn about some of the definitions and terms you may encoun
A class is a user-defined category that indicates the overall classification of the text. Developers label their data with their classes before they pass it to the model for training. ## F1 score
-The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall].
+The F1 score is a function of Precision and Recall. It's needed when you seek a balance between [precision](#precision) and [recall](#recall).
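+For reference, the standard formulation (assuming the usual definition of the metric) is F1 = 2 × (precision × recall) / (precision + recall), so the score is high only when precision and recall are both high.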
## Model
cognitive-services Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/tutorials/active-learning.md
Once the import of the test file is complete, active learning suggestions can be
> [!div class="mx-imgBorder"] > [ ![Screenshot with review suggestions page displayed.]( ../media/active-learning/review-suggestions.png) ]( ../media/active-learning/review-suggestions.png#lightbox)
+> [!NOTE]
+> Active learning suggestions aren't real time. There's an approximate delay of 30 minutes before suggestions appear on this pane. This delay balances the high cost of real-time updates to the index against overall service performance.
+ We can now either accept these suggestions or reject them using the options on the menu bar to **Accept all suggestions** or **Reject all suggestions**. Alternatively, to accept or reject individual suggestions, select the checkmark (accept) symbol or trash can (reject) symbol that appears next to individual questions in the **Review suggestions** page.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-
## Next steps
-* [What is Azure Cognitive Service for Language?](overview.md)
+* See the [previous updates](./concepts/previous-updates.md) article for service updates not listed here.
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
Title: Personalizer service encryption of data at rest
+ Title: Data-at-rest encryption in Personalizer
-description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Personalizer, and how to enable and manage CMK.
+description: Learn about the keys that you use for data-at-rest encryption in Personalizer. See how to use Azure Key Vault to configure customer-managed keys.
Previously updated : 08/28/2020 Last updated : 06/02/2022 + #Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
-# Personalizer service encryption of data at rest
+# Encryption of data at rest in Personalizer
-The Personalizer service automatically encrypts your data when persisted it to the cloud. The Personalizer service encryption protects your data and to help you to meet your organizational security and compliance commitments.
+Personalizer is a service in Azure Cognitive Services that uses a machine learning model to provide apps with user-tailored content. When Personalizer persists data to the cloud, it encrypts that data. This encryption protects your data and helps you meet organizational security and compliance commitments.
[!INCLUDE [cognitive-services-about-encryption](../includes/cognitive-services-about-encryption.md)] > [!IMPORTANT]
-> Customer-managed keys are only available on the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using CMK with the Personalizer service, you will need to create a new Personalizer resource and select E0 as the Pricing Tier. Once your Personalizer resource with the E0 pricing tier is created, you can use Azure Key Vault to set up your managed identity.
+> Customer-managed keys are only available with the E0 pricing tier. To request the ability to use customer-managed keys, fill out and submit the [Personalizer Service Customer-Managed Key Request Form](https://aka.ms/cogsvc-cmk). It takes approximately 3-5 business days to hear back about the status of your request. If demand is high, you might be placed in a queue and approved when space becomes available.
+>
+> After you're approved to use customer-managed keys with Personalizer, create a new Personalizer resource and select E0 as the pricing tier. After you've created that resource, you can use Azure Key Vault to set up your managed identity.
[!INCLUDE [cognitive-services-cmk](../includes/configure-customer-managed-keys.md)]
cognitive-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-data-and-privacy.md
Also see:
- [See Responsible use guidelines for Personalizer](responsible-use-cases.md).
-To learn more about Microsoft's privacy and security commitments, see the[Microsoft Trust Center](https://www.microsoft.com/trust-center).
+To learn more about Microsoft's privacy and security commitments, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center).
communication-services Events Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md
The goal of this document is to reduce the time it takes for Event Management Pl
## What are virtual events and event management platforms?
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about[ Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
+Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios.
## What are the building blocks of an event management platform?
confidential-computing Use Cases Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/use-cases-scenarios.md
Confidential computing can expand the number of workloads eligible for public cl
### BYOK (Bring Your Own Key) scenarios
-The adoption of hardware secure modules (HSM) enables secure transfer of keys and certificates to a protected cloud storage - [Azure Key Vault Managed HSM](\..\key-vault\managed-hsm\overview.md) ΓÇô without allowing the cloud service provider to access such sensitive information. Secrets being transferred never exist outside an HSM in plaintext form, enabling scenarios for sovereignty of keys and certificates that are client generated and managed, but still using a cloud-based secure storage.
+The adoption of hardware security modules (HSM) enables secure transfer of keys and certificates to a protected cloud storage - [Azure Key Vault Managed HSM](../key-vault/managed-hsm/overview.md) - without allowing the cloud service provider to access such sensitive information. Secrets being transferred never exist outside an HSM in plaintext form, enabling scenarios for sovereignty of keys and certificates that are client generated and managed, but still using a cloud-based secure storage.
## Secure blockchain
container-apps Authentication Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-github.md
To complete the procedure in this article, you need a GitHub account. To create
1. If you're configuring the first identity provider for this application, you'll also be prompted with a **Container Apps authentication settings** section. Otherwise, you may move on to the next step.
- These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can change customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow]([Authentication flow](authentication.md#authentication-flow)).
+ These options determine how your application responds to unauthenticated requests. The default selections redirect all requests to sign in with this new provider. You can customize this behavior now or adjust these settings later from the main **Authentication** screen by choosing **Edit** next to **Authentication settings**. To learn more about these options, see [Authentication flow](./authentication.md#authentication-flow).
1. Select **Add**.
container-apps Background Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/background-processing.md
Create a file named *queue.json* and paste the following configuration code into
}, { "name": "QueueConnectionString",
- "secretref": "queueconnection"
+ "secretRef": "queueconnection"
} ] }
container-apps Manage Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md
Here, a connection string to a queue storage account is declared in the `--secre
## Using secrets
-Application secrets are referenced via the `secretref` property. Secret values are mapped to application-level secrets where the `secretref` value matches the secret name declared at the application level.
+The secret value is mapped to the secret name declared at the application level, as described in the [defining secrets](#defining-secrets) section. The `secretRef` and `passwordSecretRef` parameters reference those secret names as environment variables at the container level. The `passwordSecretRef` parameter provides a descriptive name for secrets that contain passwords.
## Example
-The following example shows an application that declares a connection string at the application level and is used throughout the configuration via `secretref`.
+The following example shows an application that declares a connection string at the application level and is used throughout the configuration via `secretRef`.
# [ARM template](#tab/arm-template)
az containerapp create \
--environment "my-environment-name" \ --image demos/myQueueApp:v1 \ --secrets "queue-connection-string=$CONNECTIONSTRING" \
- --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"
+ --env-vars "QueueName=myqueue" "ConnectionString=secretRef:queue-connection-string"
```
-Here, the environment variable named `connection-string` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
+Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretRef`.
# [PowerShell](#tab/powershell)
az containerapp create `
--environment "my-environment-name" ` --image demos/myQueueApp:v1 ` --secrets "queue-connection-string=$CONNECTIONSTRING" `
- --env-vars "QueueName=myqueue" "ConnectionString=secretref:queue-connection-string"
+ --env-vars "QueueName=myqueue" "ConnectionString=secretRef:queue-connection-string"
```
-Here, the environment variable named `connection-string` gets its value from the application-level `queue-connection-string` secret by using `secretref`.
+Here, the environment variable named `ConnectionString` gets its value from the application-level `queue-connection-string` secret by using `secretRef`.
container-apps Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-connector.md
+
+ Title: Connect a container app to a cloud service with Service Connector
+description: Learn to connect a container app to an Azure service using the Azure portal or the CLI.
++++ Last updated : 06/16/2022
+# Customer intent: As an app developer, I want to connect a containerized app to a storage account in the Azure portal using Service Connector.
++
+# How to connect a Container Apps instance to a backing service
+
+Azure Container Apps allows you to use Service Connector to connect to cloud services in just a few steps. Service Connector manages the configuration of the network settings and connection information between different services. To view all supported services, [learn more about Service Connector](../service-connector/overview.md#what-services-are-supported-in-service-connector).
+
+In this article, you learn to connect a container app to Azure Blob Storage.
+
+> [!IMPORTANT]
+> This feature in Container Apps is currently in preview.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/dotnet).
+- An application deployed to Container Apps in a [region supported by Service Connector](../service-connector/concept-region-support.md). If you don't have one yet, [create and deploy a container to Container Apps](quickstart-portal.md)
+- An Azure Blob Storage account
+
+## Sign in to Azure
+
+First, sign in to Azure.
+
+### [Portal](#tab/azure-portal)
+
+Sign in to the Azure portal at [https://portal.azure.com/](https://portal.azure.com/) with your Azure account.
+
+### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az login
+```
+
+This command prompts your web browser to launch and load an Azure sign-in page. If the browser fails to open, use device code flow with `az login --use-device-code`. For more sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+++
+## Create a new service connection
+
+Use Service Connector to create a new service connection in Container Apps using the Azure portal or the CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. Navigate to the Azure portal.
+1. Select **All resources** on the left of the Azure portal.
+1. Enter **Container Apps** in the filter and select the name of the container app you want to use in the list.
+1. Select **Service Connector** from the left table of contents.
+1. Select **Create**.
+
+ :::image type="content" source="media/service-connector/connect-service-connector.png" alt-text="Screenshot of the Azure portal, selecting Service Connector within a container app." lightbox="media/service-connector/connect-service-connector-expanded.png":::
+
+1. Select or enter the following settings.
+
+ | Setting | Suggested value | Description |
+ | | | |
+ | **Container** | Your container name | Select your container app. |
+ | **Service type** | Blob Storage | This is the target service type. If you don't have a Storage Blob container, you can [create one](../storage/blobs/storage-quickstart-blobs-portal.md) or use another service type. |
+ | **Subscription** | One of your subscriptions | The subscription containing your target service. The default value is the subscription for your container app. |
+ | **Connection name** | Generated unique name | The connection name that identifies the connection between your container app and target service. |
+ | **Storage account** | Your storage account name | The target storage account to which you want to connect. If you choose a different service type, select the corresponding target service instance. |
+ | **Client type** | The app stack in your selected container | Your application stack that works with the target service you selected. The default value is **none**, which generates a list of configurations. If you know about the app stack or the client SDK in the container you selected, select the same app stack for the client type. |
+
+1. Select **Next: Authentication** to select the authentication type. Then select **Connection string** to use access key to connect your Blob Storage account.
+
+1. Select **Next: Network** to select the network configuration. Then select **Enable firewall settings** to update the firewall allowlist in Blob Storage so that your container app can reach the Blob Storage account.
+
+1. Select **Next: Review + Create** to review the provided information. Running the final validation takes a few seconds. Then select **Create** to create the service connection. The operation might take a minute or so to complete.
+
+### [Azure CLI](#tab/azure-cli)
+
+The following steps create a service connection using an access key or a system-assigned managed identity.
+
+1. Use the Azure CLI command `az containerapp connection list-support-types` to view all supported target services.
+
+ ```azurecli-interactive
+ az provider register -n Microsoft.ServiceLinker
+ az containerapp connection list-support-types --output table
+ ```
+
+1. Use the Azure CLI command `az containerapp connection create` to create a service connection from a container app.
+
+ If you're connecting with an access key, run the code below:
+
+ ```azurecli-interactive
+ az containerapp connection create storage-blob --secret
+ ```
+
+ If you're connecting with a system-assigned managed identity, run the code below:
+
+ ```azurecli-interactive
+ az containerapp connection create storage-blob --system-identity
+ ```
+
+1. Provide the following information at the Azure CLI's request:
+
+ - **The resource group which contains the container app**: the name of the resource group with the container app.
+ - **Name of the container app**: the name of your container app.
+ - **The container where the connection information will be saved**: the name of the container, in your container app, that connects to the target service.
+ - **The resource group which contains the storage account:** the name of the resource group that contains the storage account. In this guide, we're using a Blob Storage account.
+ - **Name of the storage account**: the name of the storage account that contains your blob.
+
+ > [!IMPORTANT]
+ > To use Managed Identity, you must have the permission to manage [Azure Active Directory role assignments](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). If you don't have this permission, you won't be able to create a connection. You can ask your subscription owner to grant you this permission or use an access key instead to create the connection.
+
+ > [!NOTE]
+ > If you don't have a Blob Storage, you can run `az containerapp connection create storage-blob --new --secret` to provision a new one.
+++
+## View service connections in Container Apps
+
+View your existing service connections using the Azure portal or the CLI.
+
+### [Portal](#tab/azure-portal)
+
+1. In **Service Connector**, select **Refresh** and you'll see a Container Apps connection displayed.
+
+1. Select **>** to expand the list. You can see the environment variables required by your application code.
+
+1. Select **...** and then **Validate**. You can see the connection validation details in the pop-up panel on the right.
+
+ :::image type="content" source="media/service-connector/connect-service-connector-refresh.png" alt-text="Screenshot of the Azure portal, viewing connection validation details.":::
+
+### [Azure CLI](#tab/azure-cli)
+
+Use the Azure CLI command `az containerapp connection list` to list all your container app's provisioned connections. Provide the following information:
+
+- **Source compute service resource group name**: the resource group name of the container app.
+- **Container app name**: the name of your container app.
+
+```azurecli-interactive
+az containerapp connection list -g "<your-container-app-resource-group>" --name "<your-container-app-name>" --output table
+```
+
+The output also displays the provisioning state of your connections: failed or succeeded.
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Environments in Azure Container Apps](environment.md)
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
az network vnet subnet create `
-With the VNET established, you can now query for the VNET and infrastructure subnet ID.
+With the VNET established, you can now query for the infrastructure subnet ID.
# [Bash](#tab/bash)
-```bash
-VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query "id" -o tsv | tr -d '[:space:]'`
-```
- ```bash INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'` ``` # [PowerShell](#tab/powershell)
-```powershell
-$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-```
- ```powershell $INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv) ```
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
az network vnet subnet create `
-With the virtual network created, you can retrieve the IDs for both the VNET and the infrastructure subnet.
+With the virtual network created, you can retrieve the ID for the infrastructure subnet.
# [Bash](#tab/bash)
-```bash
-VNET_RESOURCE_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --query "id" -o tsv | tr -d '[:space:]'`
-```
- ```bash INFRASTRUCTURE_SUBNET=`az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv | tr -d '[:space:]'` ``` # [PowerShell](#tab/powershell)
-```powershell
-$VNET_RESOURCE_ID=(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-```
- ```powershell $INFRASTRUCTURE_SUBNET=(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name infrastructure-subnet --query "id" -o tsv) ```
container-registry Container Registry Helm Repos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-helm-repos.md
To quickly manage and deploy applications for Kubernetes, you can use the [open-
This article shows you how to host Helm charts repositories in an Azure container registry, using Helm 3 commands and storing charts as [OCI artifacts](container-registry-image-formats.md#oci-artifacts). In many scenarios, you would build and upload your own charts for the applications you develop. For more information on how to build your own Helm charts, see the [Chart Template Developer's Guide][develop-helm-charts]. You can also store an existing Helm chart from another Helm repo. > [!IMPORTANT]
-> This article has been updated with Helm 3 commands as of version **3.7.1**. Helm 3.7.1 includes changes to Helm CLI commands and OCI support introduced in earlier versions of Helm 3.
+> This article has been updated with Helm 3 commands. Helm 3.7 includes changes to Helm CLI commands and OCI support introduced in earlier versions of Helm 3. By design, `helm` moves forward with each version. We recommend using **3.7.2** or later.
## Helm 3 or Helm 2?
If you've previously stored and deployed charts using Helm 2 and Azure Container
The following resources are needed for the scenario in this article: - **An Azure container registry** in your Azure subscription. If needed, create a registry using the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).-- **Helm client version 3.7.1 or later** - Run `helm version` to find your current version. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install]. If you upgrade from an earlier version of Helm 3, review the [release notes](https://github.com/helm/helm/releases).
+- **Helm client version 3.7 or later** - Run `helm version` to find your current version. For more information on how to install and upgrade Helm, see [Installing Helm][helm-install]. If you upgrade from an earlier version of Helm 3, review the [release notes](https://github.com/helm/helm/releases).
- **A Kubernetes cluster** where you will install a Helm chart. If needed, create an AKS cluster [using the Azure CLI][./learn/quick-kubernetes-deploy-cli], [using Azure PowerShell][./learn/quick-kubernetes-deploy-powershell], or [using the Azure portal][./learn/quick-kubernetes-deploy-portal]. - **Azure CLI version 2.0.71 or later** - Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
Output is similar to:
Run the [az acr repository show-manifests][az-acr-repository-show-manifests] command to see details of the chart stored in the repository. For example: ```azurecli
-az acr repository show-manifests \
- --name $ACR_NAME \
- --repository helm/hello-world --detail
+az acr manifest list-metadata \
+ --registry $ACR_NAME \
+ --name helm/hello-world --detail
``` Output, abbreviated in this example, shows a `configMediaType` of `application/vnd.cncf.helm.config.v1+json`:
cosmos-db Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md
# How to meet data residency requirements in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the[ residency requirements.](https://azure.microsoft.com/global-infrastructure/data-residency/)
+In Azure Cosmos DB, you can configure your data and backups to remain in a single region to meet the [residency requirements](https://azure.microsoft.com/global-infrastructure/data-residency/).
## Residency requirements for data
cosmos-db Managed Identity Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/managed-identity-based-authentication.md
You'll learn how to create a function app that can access Azure Cosmos DB data w
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An existing Azure Cosmos DB SQL API account. [Create an Azure Cosmos DB SQL API account](sql/create-cosmosdb-resources-portal.md) - An existing Azure Functions function app. [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md)
- - A system-assigned managed identity for the function app. [Add a system-assigned identity](/app-service/overview-managed-identity.md?tabs=cli#add-a-system-assigned-identity)
+ - A system-assigned managed identity for the function app. [Add a system-assigned identity](../app-service/overview-managed-identity.md#add-a-system-assigned-identity)
- [Azure Functions Core Tools](../azure-functions/functions-run-local.md) - To perform the steps in this article, install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in to Azure](/cli/azure/authenticate-azure-cli).
You'll learn how to create a function app that can access Azure Cosmos DB data w
> [!NOTE] > These variables will be re-used in later steps. This example assumes your Azure Cosmos DB account name is ``msdocs-cosmos-app``, your function app name is ``msdocs-function-app`` and your resource group name is ``msdocs-cosmos-functions-dotnet-identity``.
-1. View the function app's properties using the [``az functionapp show``](/cli/azure/functionapp&preserve-view=true#az-functionapp-show) command.
+1. View the function app's properties using the [``az functionapp show``](/cli/azure/functionapp#az-functionapp-show) command.
```azurecli-interactive az functionapp show \
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-get-started.md
+
+ Title: Get started with Azure Cosmos DB MongoDB API and JavaScript
+description: Get started developing a JavaScript application that works with Azure Cosmos DB MongoDB API. This article helps you learn how to set up a project and configure access to an Azure Cosmos DB MongoDB API database.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022++++
+# Get started with Azure Cosmos DB MongoDB API and JavaScript
+
+This article shows you how to connect to Azure Cosmos DB MongoDB API using the native MongoDB npm package. Once connected, you can perform operations on databases, collections, and docs.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
++
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [Node.js LTS](https://nodejs.org/en/download/)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
+* [Azure Cosmos DB MongoDB API resource](quickstart-javascript.md#create-an-azure-cosmos-db-account)
+
+## Create a new JavaScript app
+
+1. Create a new JavaScript application in an empty folder using your preferred terminal. Use the [``npm init``](https://docs.npmjs.com/cli/v8/commands/npm-init) command to begin the prompts to create the `package.json` file. Accept the defaults for the prompts.
+
+ ```console
+ npm init
+ ```
+
+2. Add the [MongoDB](https://www.npmjs.com/package/mongodb) npm package to the JavaScript project. Use the [``npm install package``](https://docs.npmjs.com/cli/v8/commands/npm-install) command specifying the name of the npm package. The `dotenv` package is used to read the environment variables from a `.env` file during local development.
+
+ ```console
+ npm install mongodb dotenv
+ ```
+
+3. To run the app, use a terminal to navigate to the application directory and run the application.
+
+ ```console
+ node index.js
+ ```
+
+## Connect with MongoDB native driver to Azure Cosmos DB MongoDB API
+
+To connect with the MongoDB native driver to Azure Cosmos DB, create an instance of the [``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html#connect) class. This class is the starting point to perform all operations against databases.
+
+The most common constructor for **MongoClient** has two parameters:
+
+| Parameter | Example value | Description |
+| | | |
+| ``url`` | ``COSMOS_CONNECTION_STRING`` environment variable | MongoDB API connection string to use for all requests |
+| ``options`` | `{ssl: true, tls: true, }` | [MongoDB Options](https://mongodb.github.io/node-mongodb-native/4.5/interfaces/MongoClientOptions.html) for the connection. |
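+The snippet below is a minimal sketch (not the article's sample project) showing how these two parameters might be passed to the constructor. It assumes the connection string is available in the ``COSMOS_CONNECTION_STRING`` environment variable, as configured later in this article.
+
+```javascript
+// Minimal sketch: construct a MongoClient from a connection string and options.
+// Assumes COSMOS_CONNECTION_STRING is set in the environment (see "Configure environment variables").
+const { MongoClient } = require('mongodb');
+
+const url = process.env.COSMOS_CONNECTION_STRING;
+const options = { ssl: true, tls: true };
+
+const client = new MongoClient(url, options);
+
+async function main() {
+  await client.connect();  // open the connection to Azure Cosmos DB
+  console.log('Connected to Azure Cosmos DB MongoDB API');
+  await client.close();    // close the connection when you're done
+}
+
+main().catch(console.error);
+```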
+
+Refer to the [Troubleshooting guide](error-codes-solutions.md) for connection issues.
+
+## Get resource name
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+Skip this step and use the information for the portal in the next step.
+++
+## Retrieve your connection string
+
+### [Azure CLI](#tab/azure-cli)
++
+### [PowerShell](#tab/azure-powershell)
++
+### [Portal](#tab/azure-portal)
+
+> [!TIP]
+> For this guide, we recommend using the resource group name ``msdocs-cosmos``.
++++
+## Configure environment variables
++
+## Create MongoClient with connection string
++
+1. Add dependencies to reference the MongoDB and DotEnv npm packages.
+
+ :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="package_dependencies":::
+
+2. Define a new instance of the ``MongoClient`` class using the constructor, and use [``process.env``](https://nodejs.org/dist/latest-v8.x/docs/api/process.html#process_process_env) to read the connection string.
+
+ :::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/101-client-connection-string/index.js" id="client_credentials":::
+
+For more information on different ways to create a ``MongoClient`` instance, see [MongoDB NodeJS Driver Quick Start](https://www.npmjs.com/package/mongodb#quick-start).
+
+## Close the MongoClient connection
+
+When your application is finished with the connection, remember to close it. The `.close()` call should come after all database calls are made.
+
+```javascript
+client.close()
+```
+
+## Use MongoDB client classes with Cosmos DB for MongoDB API
++
+Each type of resource is represented by one or more associated JavaScript classes. Here's a list of the most common classes:
+
+| Class | Description |
+|||
+|[``MongoClient``](https://mongodb.github.io/node-mongodb-native/4.5/classes/MongoClient.html)|This class provides a client-side logical representation for the MongoDB API layer on Cosmos DB. The client object is used to configure and execute requests against the service.|
+|[``Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)|This class is a reference to a database that may, or may not, exist in the service yet. The database is validated server-side when you attempt to access it or perform an operation against it.|
+|[``Collection``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html)|This class is a reference to a collection that also may not exist in the service yet. The collection is validated server-side when you attempt to work with it.|
+
+The following guides show you how to use each of these classes to build your application.
+
+**Guide**:
+
+* [Manage databases](how-to-javascript-manage-databases.md)
+* [Manage collections](how-to-javascript-manage-collections.md)
+* [Manage documents](how-to-javascript-manage-documents.md)
+* [Use queries to find documents](how-to-javascript-manage-queries.md)
+
+## See also
+
+- [Package (NuGet)](https://www.nuget.org/packages/Microsoft.Azure.Cosmos)
+- [API reference](https://docs.mongodb.com/drivers/node)
+
+## Next steps
+
+Now that you've connected to a MongoDB API account, use the next guide to create and manage databases.
+
+> [!div class="nextstepaction"]
+> [Create a database in Azure Cosmos DB MongoDB API using JavaScript](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-collections.md
+
+ Title: Create a collection in Azure Cosmos DB MongoDB API using JavaScript
+description: Learn how to work with a collection in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Manage a collection in Azure Cosmos DB MongoDB API using JavaScript
++
+Manage your MongoDB collection stored in Cosmos DB with the native MongoDB client driver.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
++
+## Name a collection
+
+In Azure Cosmos DB, a collection is analogous to a table in a relational database. When you create a collection, the collection name forms a segment of the URI used to access the collection resource and any child docs.
+
+Here are some quick rules when naming a collection:
+
+* Keep collection names between 3 and 63 characters long
+* Collection names can only contain lowercase letters, numbers, or the dash (-) character.
+* Collection names must start with a lowercase letter or number.
+
+## Get collection instance
+
+Use an instance of the **Collection** class to access the collection on the server.
+
+* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+
+The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
+
+## Create a collection
+
+To create a collection, insert a document into the collection.
+
+* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html#collection)
+* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
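+The following hedged sketch shows the pattern: inserting a document creates the collection (and its database) if they don't exist yet. It assumes `client` is a connected `MongoClient`, and uses the sample `adventureworks` database and `products` collection names.
+
+```javascript
+// Sketch only: inserting a document creates the collection (and database) if needed.
+// Assumes `client` is a connected MongoClient; "adventureworks"/"products" are sample names.
+async function createCollectionByInsert(client) {
+  const collection = client.db('adventureworks').collection('products');
+
+  const result = await collection.insertOne({ name: 'Road bike', inventory: 42 });
+  console.log(`Inserted document with _id: ${result.insertedId}`);
+
+  // insertMany works the same way for a batch of documents
+  await collection.insertMany([
+    { name: 'Helmet', inventory: 10 },
+    { name: 'Gloves', inventory: 25 }
+  ]);
+}
+```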
++
+## Drop a collection
+
+* [MongoClient.Db.dropCollection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropCollection)
+
+Drop the collection from the database to remove it permanently. However, the next insert or update operation that accesses the collection will create a new collection with that name.
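+A minimal sketch of the drop operation, assuming `client` is a connected `MongoClient` and the sample `adventureworks`/`products` names:
+
+```javascript
+// Sketch only: permanently remove a collection; assumes `client` is a connected MongoClient.
+async function dropProductsCollection(client) {
+  const dropped = await client.db('adventureworks').dropCollection('products');
+  console.log(`Collection dropped: ${dropped}`); // true if the collection existed and was removed
+}
+```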
++
+The preceding code snippet displays the following example console output:
++
+## Get collection indexes
+
+An index is used by the MongoDB query engine to improve performance to database queries.
+
+* [MongoClient.Db.Collection.indexes](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#indexes)
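+A minimal sketch of listing a collection's indexes, assuming `client` is a connected `MongoClient`:
+
+```javascript
+// Sketch only: list the indexes defined on a collection.
+// Assumes `client` is a connected MongoClient; "adventureworks"/"products" are sample names.
+async function listProductIndexes(client) {
+  const indexes = await client.db('adventureworks').collection('products').indexes();
+
+  for (const index of indexes) {
+    console.log(`${index.name}: ${JSON.stringify(index.key)}`);
+  }
+}
+```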
++
+The preceding code snippet displays the following example console output:
++
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Databases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-databases.md
+
+ Title: Manage a MongoDB database using JavaScript
+description: Learn how to manage your Cosmos DB resource when it provides the MongoDB API with a JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Manage a MongoDB database using JavaScript
++
+Your MongoDB server in Azure Cosmos DB is available from the common npm packages for MongoDB such as:
+
+* [MongoDB](https://www.npmjs.com/package/mongodb)
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+
+## Name a database
+
+In Azure Cosmos DB, a database is analogous to a namespace. When you create a database, the database name forms a segment of the URI used to access the database resource and any child resources.
+
+Here are some quick rules when naming a database:
+
+* Keep database names between 3 and 63 characters long
+* Database names can only contain lowercase letters, numbers, or the dash (-) character.
+* Database names must start with a lowercase letter or number.
+
+Once created, the URI for a database is in this format:
+
+``https://<cosmos-account-name>.documents.azure.com/dbs/<database-name>``
+
+## Get database instance
+
+The database holds the collections and their documents. Use an instance of the **Db** class to access the databases on the server.
+
+* [MongoClient.Db](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+
+The following code snippets assume you've already created your [client connection](how-to-javascript-get-started.md#create-mongoclient-with-connection-string) and that you [close your client connection](how-to-javascript-get-started.md#close-the-mongoclient-connection) after these code snippets.
+
+## Get server information
+
+Access the **Admin** class to retrieve server information. You don't need to specify the database name in the `db` method. The information returned is specific to MongoDB and doesn't represent the Azure Cosmos DB platform itself.
+
+* [MongoClient.Db.Admin](https://mongodb.github.io/node-mongodb-native/4.7/classes/Admin.html)
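+A minimal sketch of reading server information through the `Admin` class, assuming `client` is a connected `MongoClient`:
+
+```javascript
+// Sketch only: read server build information through the Admin class.
+// Assumes `client` is a connected MongoClient; no database name is needed for admin commands.
+async function showServerInfo(client) {
+  const admin = client.db().admin();
+  const info = await admin.serverInfo();
+  console.log(`Server version: ${info.version}`);
+}
+```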
++
+The preceding code snippet displays the following example console output:
++
+## Does database exist?
+
+The native MongoDB driver for JavaScript creates the database if it doesn't exist when you access it. If you would prefer to know if the database already exists before using it, get the list of current databases and filter for the name:
+
+* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
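+A minimal sketch of this existence check, assuming `client` is a connected `MongoClient` and the sample `adventureworks` database name:
+
+```javascript
+// Sketch only: check whether a database already exists by filtering the database list.
+// Assumes `client` is a connected MongoClient; "adventureworks" is a sample name.
+async function databaseExists(client, databaseName = 'adventureworks') {
+  const { databases } = await client.db().admin().listDatabases();
+  return databases.some(db => db.name === databaseName);
+}
+```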
++
+The preceding code snippet displays the following example console output:
++
+## Get list of databases, collections, and document count
+
+When you manage your MongoDB server programmatically, it's helpful to know which databases and collections are on the server and how many documents are in each collection.
+
+* [MongoClient.Db.Admin.listDatabases](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html)
+* [MongoClient.Db.listCollections](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#listCollections)
+* [MongoClient.Db.Collection](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html)
+* [MongoClient.Db.Collection.countDocuments](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#countDocuments)
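+A minimal sketch that combines these calls, assuming `client` is a connected `MongoClient`:
+
+```javascript
+// Sketch only: walk every database and collection and count the documents in each.
+// Assumes `client` is a connected MongoClient.
+async function summarizeServer(client) {
+  const { databases } = await client.db().admin().listDatabases();
+
+  for (const { name: dbName } of databases) {
+    const collections = await client.db(dbName).listCollections().toArray();
+
+    for (const { name: collName } of collections) {
+      const count = await client.db(dbName).collection(collName).countDocuments();
+      console.log(`${dbName}.${collName}: ${count} document(s)`);
+    }
+  }
+}
+```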
++
+The preceding code snippet displays the following example console output:
++
+## Get database object instance
+
+To get a database object instance, call the following method. This method accepts an optional database name and can be part of a chain.
+
+* [``MongoClient.Db``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Db.html)
+
+A database is created when it is accessed. The most common way to access a new database is to add a document to a collection. In one line of code using chained objects, the database, collection, and doc are created.
+
+```javascript
+const insertOneResult = await client.db("adventureworks").collection("products").insertOne(doc);
+```
+
+Learn more about working with [collections](how-to-javascript-manage-collections.md) and documents.
+
+## Drop a database
+
+A database is removed from the server using the dropDatabase method on the DB class.
+
+* [DB.dropDatabase](https://mongodb.github.io/node-mongodb-native/4.7/classes/Db.html#dropDatabase)
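+A minimal sketch of dropping a database, assuming `client` is a connected `MongoClient` and the sample `adventureworks` name:
+
+```javascript
+// Sketch only: permanently remove a database and everything in it.
+// Assumes `client` is a connected MongoClient; "adventureworks" is a sample name.
+async function dropAdventureWorks(client) {
+  const dropped = await client.db('adventureworks').dropDatabase();
+  console.log(`Database dropped: ${dropped}`); // true when the drop succeeds
+}
+```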
++
+The preceding code snippet displays the following example console output:
++
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Work with a collection](how-to-javascript-manage-collections.md)
cosmos-db How To Javascript Manage Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-documents.md
+
+ Title: Create a document in Azure Cosmos DB MongoDB API using JavaScript
+description: Learn how to work with a document in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Manage a document in Azure Cosmos DB MongoDB API using JavaScript
++
+Manage your MongoDB documents with the ability to insert, update, and delete documents.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
+
+## Insert a document
+
+Insert a document, defined with a JSON schema, into your collection.
+
+* [MongoClient.Db.Collection.insertOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertOne)
+* [MongoClient.Db.Collection.insertMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#insertMany)
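+A minimal sketch of both insert operations, assuming `client` is a connected `MongoClient` and sample product documents:
+
+```javascript
+// Sketch only: insert one document, then a batch.
+// Assumes `client` is a connected MongoClient; "adventureworks"/"products" are sample names.
+async function insertProducts(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const single = await products.insertOne({ name: 'Road bike', category: 'bikes', inventory: 42 });
+  console.log(`Inserted _id: ${single.insertedId}`);
+
+  const batch = await products.insertMany([
+    { name: 'Helmet', category: 'accessories', inventory: 10 },
+    { name: 'Gloves', category: 'accessories', inventory: 25 }
+  ]);
+  console.log(`Inserted ${batch.insertedCount} documents`);
+}
+```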
++
+The preceding code snippet displays the following example console output:
++
+## Document ID
+
+If you don't provide an ID, `_id`, for your document, one is created for you as a BSON object. The value of the provided ID is accessed with the ObjectId method.
+
+* [ObjectId](https://mongodb.github.io/node-mongodb-native/4.7/classes/ObjectId.html)
+
+Use the ID to query for documents:
+
+```javascript
+const query = { _id: ObjectId("62b1f43a9446918500c875c5")};
+```
+
+## Update a document
+
+To update a document, specify the query used to find the document along with a set of properties of the document that should be updated. You can choose to upsert the document, which inserts the document if it doesn't already exist.
+
+* [MongoClient.Db.Collection.updateOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateOne)
+* [MongoClient.Db.Collection.updateMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#updateMany)
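+A minimal sketch of an upsert, assuming `client` is a connected `MongoClient` and sample query and field values:
+
+```javascript
+// Sketch only: update a document by name and upsert it if it doesn't exist yet.
+// Assumes `client` is a connected MongoClient; the query and fields are sample values.
+async function upsertProduct(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const result = await products.updateOne(
+    { name: 'Road bike' },        // query used to find the document
+    { $set: { inventory: 100 } }, // properties to update
+    { upsert: true }              // insert the document if it doesn't exist
+  );
+
+  console.log(`Matched: ${result.matchedCount}, modified: ${result.modifiedCount}, upsertedId: ${result.upsertedId}`);
+}
+```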
+++
+The preceding code snippet displays the following example console output for an insert:
++
+The preceding code snippet displays the following example console output for an update:
++
+## Bulk updates to a collection
+
+You can perform several operations at once with the **bulkWrite** operation. Learn more about how to [optimize bulk writes for Cosmos DB](optimize-write-performance.md#tune-for-the-optimal-batch-size-and-thread-count).
+
+The following bulk operations are available:
+
+* [MongoClient.Db.Collection.bulkWrite](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#bulkWrite)
+
+ * insertOne
+ * updateOne
+ * updateMany
+ * deleteOne
+ * deleteMany
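+A minimal sketch that combines several of these operations in one `bulkWrite` call, assuming `client` is a connected `MongoClient` and sample documents:
+
+```javascript
+// Sketch only: combine several operations in one bulkWrite call.
+// Assumes `client` is a connected MongoClient; the documents are sample values.
+async function bulkUpdateProducts(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const result = await products.bulkWrite([
+    { insertOne: { document: { name: 'Water bottle', inventory: 30 } } },
+    { updateOne: { filter: { name: 'Helmet' }, update: { $set: { inventory: 5 } }, upsert: true } },
+    { deleteMany: { filter: { inventory: { $lte: 0 } } } }
+  ]);
+
+  console.log(`Inserted: ${result.insertedCount}, modified: ${result.modifiedCount}, deleted: ${result.deletedCount}`);
+}
+```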
++
+The preceding code snippet displays the following example console output:
++
+## Delete a document
+
+To delete documents, use a query to define how the documents are found.
+
+* [MongoClient.Db.Collection.deleteOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteOne)
+* [MongoClient.Db.Collection.deleteMany](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#deleteMany)
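+A minimal sketch of both delete operations, assuming `client` is a connected `MongoClient` and sample queries:
+
+```javascript
+// Sketch only: delete a single document, then all documents that match a query.
+// Assumes `client` is a connected MongoClient; the queries are sample values.
+async function deleteProducts(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const one = await products.deleteOne({ name: 'Gloves' });
+  console.log(`Deleted ${one.deletedCount} document`);
+
+  const many = await products.deleteMany({ category: 'accessories' });
+  console.log(`Deleted ${many.deletedCount} documents`);
+}
+```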
++
+The preceding code snippet displays the following example console output:
++
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db How To Javascript Manage Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-javascript-manage-queries.md
+
+ Title: Use a query in Azure Cosmos DB MongoDB API using JavaScript
+description: Learn how to use a query in your Azure Cosmos DB MongoDB API database using the JavaScript SDK.
++++
+ms.devlang: javascript
+ Last updated : 06/23/2022+++
+# Use a query in Azure Cosmos DB MongoDB API using JavaScript
++
+Use queries to find documents in a collection.
+
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-mongodb-api-javascript-samples) are available on GitHub as a JavaScript project.
+
+[MongoDB API reference documentation](https://docs.mongodb.com/drivers/node) | [MongoDB Package (npm)](https://www.npmjs.com/package/mongodb)
++
+## Query for documents
+
+To find documents, use a query to define how the documents are found.
+
+* [MongoClient.Db.Collection.findOne](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#findOne)
+* [MongoClient.Db.Collection.find](https://mongodb.github.io/node-mongodb-native/4.7/classes/Collection.html#find)
+* [FindCursor](https://mongodb.github.io/node-mongodb-native/4.7/classes/FindCursor.html)
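+A minimal sketch of `findOne` and a `FindCursor` iteration, assuming `client` is a connected `MongoClient` and sample query values:
+
+```javascript
+// Sketch only: find one document, then iterate a FindCursor over all matches.
+// Assumes `client` is a connected MongoClient; the queries are sample values.
+async function findProducts(client) {
+  const products = client.db('adventureworks').collection('products');
+
+  const firstMatch = await products.findOne({ category: 'bikes' });
+  console.log(firstMatch);
+
+  const cursor = products.find({ inventory: { $gt: 0 } }).sort({ name: 1 });
+  for await (const doc of cursor) {
+    console.log(`${doc.name}: ${doc.inventory}`);
+  }
+}
+```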
++
+The preceding code snippet displays the following example console output:
++
+## See also
+
+- [Get started with Azure Cosmos DB MongoDB API and JavaScript](how-to-javascript-get-started.md)
+- [Create a database](how-to-javascript-manage-databases.md)
cosmos-db Quickstart Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-javascript.md
This quickstart will create a single Azure Cosmos DB account using the MongoDB A
#### [Azure CLI](#tab/azure-cli)
-1. Create shell variables for *accountName*, *resourceGroupName*, and *location*.
-
- ```azurecli-interactive
- # Variable for resource group name
- resourceGroupName="msdocs-cosmos-javascript-quickstart-rg"
- location="westus"
-
- # Variable for account name with a randomly generated suffix
- let suffix=$RANDOM*$RANDOM
- accountName="msdocs-javascript-$suffix"
- ```
-
-1. If you haven't already, sign in to the Azure CLI using the [``az login``](/cli/azure/reference-index#az-login) command.
-
-1. Use the [``az group create``](/cli/azure/group#az-group-create) command to create a new resource group in your subscription.
-
- ```azurecli-interactive
- az group create \
- --name $resourceGroupName \
- --location $location
- ```
-
-1. Use the [``az cosmosdb create``](/cli/azure/cosmosdb#az-cosmosdb-create) command to create a new Azure Cosmos DB MongoDB API account with default settings.
-
- ```azurecli-interactive
- az cosmosdb create \
- --resource-group $resourceGroupName \
- --name $accountName \
- --locations regionName=$location
- --kind MongoDB
- ```
-
-1. Find the MongoDB API **connection string** from the list of connection strings for the account with the[``az cosmosdb list-connection-strings``](/cli/azure/cosmosdb#az-cosmosdb-list-connection-strings) command.
-
- ```azurecli-interactive
- az cosmosdb list-connection-strings \
- --resource-group $resourceGroupName \
- --name $accountName
- ```
-
-1. Record the *PRIMARY KEY* values. You'll use these credentials later.
#### [PowerShell](#tab/azure-powershell)
-1. Create shell variables for *ACCOUNT_NAME*, *RESOURCE_GROUP_NAME*, and **LOCATION**.
-
- ```azurepowershell-interactive
- # Variable for resource group name
- $RESOURCE_GROUP_NAME = "msdocs-cosmos-javascript-quickstart-rg"
- $LOCATION = "West US"
-
- # Variable for account name with a randomnly generated suffix
- $SUFFIX = Get-Random
- $ACCOUNT_NAME = "msdocs-javascript-$SUFFIX"
- ```
-
-1. If you haven't already, sign in to Azure PowerShell using the [``Connect-AzAccount``](/powershell/module/az.accounts/connect-azaccount) cmdlet.
-
-1. Use the [``New-AzResourceGroup``](/powershell/module/az.resources/new-azresourcegroup) cmdlet to create a new resource group in your subscription.
-
- ```azurepowershell-interactive
- $parameters = @{
- Name = $RESOURCE_GROUP_NAME
- Location = $LOCATION
- }
- New-AzResourceGroup @parameters
- ```
-
-1. Use the [``New-AzCosmosDBAccount``](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet to create a new Azure Cosmos DB MongoDB API account with default settings.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Location = $LOCATION
- Kind = "MongoDB"
- }
- New-AzCosmosDBAccount @parameters
- ```
-
-1. Find the *CONNECTION STRING* from the list of keys and connection strings for the account with the [``Get-AzCosmosDBAccountKey``](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
-
- ```azurepowershell-interactive
- $parameters = @{
- ResourceGroupName = $RESOURCE_GROUP_NAME
- Name = $ACCOUNT_NAME
- Type = "ConnectionStrings"
- }
- Get-AzCosmosDBAccountKey @parameters |
- Select-Object -Property "Primary MongoDB Connection String"
- ```
-
-1. Record the *CONNECTION STRING* value. You'll use these credentials later.
#### [Portal](#tab/azure-portal)
-> [!TIP]
-> For this quickstart, we recommend using the resource group name ``msdocs-cosmos-javascript-quickstart-rg``.
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. From the Azure portal menu or the **Home page**, select **Create a resource**.
-
-1. On the **New** page, search for and select **Azure Cosmos DB**.
-
-1. On the **Select API option** page, select the **Create** option within the **MongoDB** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the MongoDB API](/azure/cosmos-db/mongodb/mongodb-introduction).
-
- :::image type="content" source="media/quickstart-javascript/cosmos-api-choices.png" lightbox="media/quickstart-javascript/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
-
-1. On the **Create Azure Cosmos DB Account** page, enter the following information:
- | Setting | Value | Description |
- | | | |
- | Subscription | Subscription name | Select the Azure subscription that you wish to use for this Azure Cosmos account. |
- | Resource Group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. |
- | Account Name | A unique name | Enter a name to identify your Azure Cosmos account. The name will be used as part of a fully qualified domain name (FQDN) with a suffix of *documents.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3-44 characters in length. |
- | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB account. Use the location that is closest to your users to give them the fastest access to the data. |
- | Capacity mode |Provisioned throughput or Serverless|Select **Provisioned throughput** to create an account in [provisioned throughput](../set-throughput.md) mode. Select **Serverless** to create an account in [serverless](../serverless.md) mode. |
- | Apply Azure Cosmos DB free tier discount | **Apply** or **Do not apply** |With Azure Cosmos DB free tier, you'll get the first 1000 RU/s and 25 GB of storage for free in an account. Learn more about [free tier](https://azure.microsoft.com/pricing/details/cosmos-db/). |
- | Version | MongoDB version | Select the MongoDB server version that matches your application requirements.
-
- > [!NOTE]
- > You can have up to one free tier Azure Cosmos DB account per Azure subscription and must opt-in when creating the account. If you do not see the option to apply the free tier discount, this means another account in the subscription has already been enabled with free tier.
-
- :::image type="content" source="media/quickstart-javascript/new-cosmos-account-page.png" lightbox="media/quickstart-javascript/new-cosmos-account-page.png" alt-text="Screenshot of new account page for Azure Cosmos D B SQL A P I.":::
+
-1. Select **Review + create**.
+### Get MongoDB connection string
-1. Review the settings you provide, and then select **Create**. It takes a few minutes to create the account. Wait for the portal page to display **Your deployment is complete** before moving on.
+#### [Azure CLI](#tab/azure-cli)
-1. Select **Go to resource** to go to the Azure Cosmos DB account page.
- :::image type="content" source="media/quickstart-javascript/cosmos-deployment-complete.png" lightbox="media/quickstart-javascript/cosmos-deployment-complete.png" alt-text="Screenshot of deployment page for Azure Cosmos D B SQL A P I resource.":::
+#### [PowerShell](#tab/azure-powershell)
-1. From the Azure Cosmos DB SQL API account page, select the **Connection String** navigation menu option.
-1. Record the values for the **PRIMARY CONNECTION STRING** field. You'll use this value in a later step.
+#### [Portal](#tab/azure-portal)
- :::image type="content" source="media/quickstart-javascript/cosmos-endpoint-key-credentials.png" lightbox="media/quickstart-javascript/cosmos-endpoint-key-credentials.png" alt-text="Screenshot of Keys page with various credentials for an Azure Cosmos D B SQL A P I account.":::
npm install mongodb dotenv
### Configure environment variables
-To use the **CONNECION STRING** values within your JavaScript code, set this value on the local machine running the application. To set the environment variable, use your preferred terminal to run the following commands:
-
-#### [Windows](#tab/windows)
-
-```powershell
-$env:COSMOS_CONNECTION_STRING = "<cosmos-connection-string>"
-```
-
-#### [Linux / macOS](#tab/linux+macos)
-
-```bash
-export COSMOS_CONNECTION_STRING="<cosmos-connection-string>"
-```
-
-#### [.env](#tab/dotenv)
-
-A `.env` file is a standard way to store environment variables in a project. Create a `.env` file in the root of your project. Add the following lines to the `.env` file:
-
-```dotenv
-COSMOS_CONNECTION_STRING="<cosmos-connection-string>"
-```
-- ## Object model
cosmos-db Partners Migration Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partners-migration-cosmosdb.md
From NoSQL migration to application development, you can choose from a variety o
|[Altoros Development LLC](https://www.altoros.com/) | IoT, Personalization Retail (inventory), Serverless architectures NoSQL migration, App development| USA | |[Avanade](https://www.avanade.com/) | IoT, Retail (inventory), Serverless Architecture, App development | Austria, Germany, Switzerland, Italy, Norway, Spain, UK, Canada | |[Accenture](https://www.accenture.com/) | IoT, Retail (inventory), Serverless Architecture, App development |Global|
-|[Capax Global LLC](https://www.capaxglobal.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
+|Capax Global LLC | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), Serverless architecture, App development| USA |
| [Capgemini](https://www.capgemini.com/) | Retail (inventory), IoT, Operational Analytics (Spark), App development | USA, France, UK, Netherlands, Finland | | [Cognizant](https://www.cognizant.com/) | IoT, Personalization, Retail (inventory), Operational Analytics (Spark), App development |USA, Canada, UK, Denmark, Netherlands, Switzerland, Australia, Japan | |[Infosys](https://www.infosys.com/) | App development | USA |
cosmos-db Create Sql Api Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-spark.md
If you are using our older Spark 2.4 Connector, you can find out how to migrate
* Azure Cosmos DB Apache Spark 3 OLTP Connector for Core (SQL) API: [Release notes and resources](sql-api-sdk-java-spark-v3.md) * Learn more about [Apache Spark](https://spark.apache.org/).
+* Learn how to configure [throughput control](throughput-control-spark.md).
* Check out more [samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
-# Microsoft Defender for Cosmos DB
+# Microsoft Defender for Azure Cosmos DB
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Microsoft Defender for Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
+Microsoft Defender for Azure Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
Security alerts are triggered when anomalies in activity occur. These security alerts show up in [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). Subscription administrators also get these alerts over email, with details of the suspicious activity and recommendations on how to investigate and remediate the threats. > [!NOTE] >
-> * Microsoft Defender for Cosmos DB is currently available only for the Core (SQL) API.
-> * Microsoft Defender for Cosmos DB is not currently available in Azure government and sovereign cloud regions.
+> * Microsoft Defender for Azure Cosmos DB is currently available only for the Core (SQL) API.
+> * Microsoft Defender for Azure Cosmos DB is not currently available in Azure government and sovereign cloud regions.
For a full investigation experience of the security alerts, we recommended enabling [diagnostic logging in Azure Cosmos DB](../monitor-cosmos-db.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases. ## Threat types
-Microsoft Defender for Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
+Microsoft Defender for Azure Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
- **Potential SQL injection attacks**: Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks can't work in Azure Cosmos DB. However, there are some variations of SQL injections that can succeed and may result in exfiltrating data from your Azure Cosmos DB accounts. Defender for Azure Cosmos DB detects both successful and failed attempts, and helps you harden your environment to prevent these threats.
Microsoft Defender for Cosmos DB detects anomalous activities indicating unusual
- **Suspicious database activity**: For example, suspicious key-listing patterns that resemble known malicious lateral movement techniques and suspicious data extraction patterns.
-## Configure Microsoft Defender for Cosmos DB
+## Configure Microsoft Defender for Azure Cosmos DB
-You can configure Microsoft Defender protection in any of several ways, described in the following sections.
-
-# [Portal](#tab/azure-portal)
-
-1. Launch the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
-
-2. From the Azure Cosmos DB account, from the **Settings** menu, select **Microsoft Defender for Cloud**.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp.png" alt-text="Set up Azure Defender for Cosmos DB" border="true":::
-
-3. In the **Microsoft Defender for Cloud** configuration blade:
-
- * Change the option from **OFF** to **ON**.
- * Click **Save**.
-
-# [REST API](#tab/rest-api)
-
-Use REST API commands to create, update, or get the Azure Defender setting for a specific Azure Cosmos DB account.
-
-* [Advanced Threat Protection - Create](/rest/api/securitycenter/advancedthreatprotection/create)
-* [Advanced Threat Protection - Get](/rest/api/securitycenter/advancedthreatprotection/get)
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the following PowerShell cmdlets:
-
-* [Enable Advanced Threat Protection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection)
-* [Get Advanced Threat Protection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection)
-* [Disable Advanced Threat Protection](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection)
-
-# [ARM template](#tab/arm-template)
-
-Use an Azure Resource Manager (ARM) template to set up Azure Cosmos DB with Azure Defender protection enabled. For more information, see
-[Create a Cosmos DB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/microsoft-defender-cosmosdb-create-account/).
-
-# [Azure Policy](#tab/azure-policy)
-
-Use an Azure Policy to enable Azure Defender for Cosmos DB.
-
-1. Launch the Azure **Policy - Definitions** page, and search for the **Deploy Advanced Threat Protection for Cosmos DB** policy.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db.png" alt-text="Search Policy":::
-
-1. Click on the **Deploy Advanced Threat Protection for CosmosDB** policy, and then click **Assign**.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp-policy.png" alt-text="Select Subscription Or Group":::
-
-1. From the **Scope** field, click the three dots, select an Azure subscription or resource group, and then click **Select**.
-
- :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp-details.png" alt-text="Policy Definitions Page":::
-
-1. Enter the other parameters, and click **Assign**.
--
+See [Enable Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/defender-for-databases-enable-cosmos-protections.md).
## Manage security alerts
When Azure Cosmos DB activity anomalies occur, a security alert is triggered wit
## Next steps
-* Learn more about [Microsoft Defender for Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
+* Learn more about [Microsoft Defender for Azure Cosmos DB](../../defender-for-cloud/concept-defender-for-cosmos.md)
* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
cosmos-db How To Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-create-account.md
Create a single Azure Cosmos DB account using the SQL API.
1. On the **New** page, search for and select **Azure Cosmos DB**.
-1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](/azure/cosmos-db/sql/introduction.md).
+1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
:::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-get-started.md
The most common constructor for **CosmosClient** has two parameters:
--query "documentEndpoint" ```
-1. Find the *PRIMARY KEY* from the list of keys for the account with the[``az-cosmosdb-keys-list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
```azurecli-interactive az cosmosdb keys list \
Another constructor for **CosmosClient** only contains a single parameter:
) ```
-1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the[``az-cosmosdb-keys-list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+1. Find the *PRIMARY CONNECTION STRING* from the list of connection strings for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
```azurecli-interactive az cosmosdb keys list \
cosmos-db Kafka Connector Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/kafka-connector-source.md
curl -H "Content-Type: application/json" -X POST -d @<path-to-JSON-config-file>
### Confirm data written to Kafka topic
-1. Open Kafka Topic UI on `<http://localhost:9000>`.
+1. Open Kafka Topic UI on `http://localhost:9000`.
1. Select the Kafka "apparels" topic you created. 1. Verify that the document you inserted into Azure Cosmos DB earlier appears in the Kafka topic.
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quickstart-dotnet.md
This quickstart will create a single Azure Cosmos DB account using the SQL API.
--query "documentEndpoint" ```
-1. Find the *PRIMARY KEY* from the list of keys for the account with the[``az-cosmosdb-keys-list``](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
+1. Find the *PRIMARY KEY* from the list of keys for the account with the [`az-cosmosdb-keys-list`](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command.
```azurecli-interactive az cosmosdb keys list \
This quickstart will create a single Azure Cosmos DB account using the SQL API.
1. On the **New** page, search for and select **Azure Cosmos DB**.
-1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](/azure/cosmos-db/sql/introduction.md).
+1. On the **Select API option** page, select the **Create** option within the **Core (SQL) - Recommend** section. Azure Cosmos DB has five APIs: SQL, MongoDB, Gremlin, Table, and Cassandra. [Learn more about the SQL API](../index.yml).
:::image type="content" source="media/create-account-portal/cosmos-api-choices.png" lightbox="media/create-account-portal/cosmos-api-choices.png" alt-text="Screenshot of select A P I option page for Azure Cosmos D B.":::
cosmos-db Throughput Control Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/throughput-control-spark.md
+
+ Title: Azure Cosmos DB Spark Connector - Throughput Control
+description: Learn about controlling throughput for bulk data movements in the Azure Cosmos DB Spark Connector
++++ Last updated : 06/22/2022++++
+# Azure Cosmos DB Spark Connector - throughput control
+
+The [Spark Connector](create-sql-api-spark.md) allows you to communicate with Azure Cosmos DB using [Apache Spark](https://spark.apache.org/). This article describes how the throughput control feature works. Check out our [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples) to get started using throughput control.
+
+## Why is throughput control important?
+
+ Having throughput control helps to isolate the performance needs of applications running against a container, by limiting the number of [request units](../request-units.md) that can be consumed by a given Spark client.
+
+There are several advanced scenarios that benefit from client-side throughput control:
+
+- **Different operations and tasks have different priorities** - there can be a need to prevent normal transactions from being throttled due to data ingestion or copy activities. Some operations and tasks aren't sensitive to latency, and are more tolerant of being throttled than others.
+
+- **Provide fairness/isolation to different end users/tenants** - An application will usually have many end users. Some users may send too many requests, which consume all available throughput, causing others to get throttled.
+
+- **Load balancing of throughput between different Azure Cosmos DB clients** - in some use cases, it's important to make sure all the clients get a fair (equal) share of the throughput.
++
+Throughput control enables more granular RU rate limiting where it's needed.
+
+## How does throughput control work?
+
+To configure throughput control for the Spark Connector, first create a container that holds the throughput control metadata, with a partition key of `/groupId` and `ttl` enabled. Here we create this container using Spark SQL, and call it `ThroughputControl`:
++
+```sql
+ %sql
+ CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.ThroughputControl
+ USING cosmos.oltp
+ OPTIONS(spark.cosmos.database = 'database-v4')
+ TBLPROPERTIES(partitionKeyPath = '/groupId', autoScaleMaxThroughput = '4000', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1');
+```
+
+> [!NOTE]
+> The above example creates a container with [autoscale](../provision-throughput-autoscale.md). If you prefer standard provisioning, you can replace `autoScaleMaxThroughput` with `manualThroughput` instead.
+
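+The note above mentions that `autoScaleMaxThroughput` can be replaced with `manualThroughput` for standard provisioned throughput. As a minimal sketch (not from the article), the same container could be created from Scala with `spark.sql`, assuming the `cosmosCatalog` catalog is already configured for the session; the 1100 RU/s figure is only illustrative:
+
+```scala
+// Sketch only: the same ThroughputControl container, but with standard (manual)
+// provisioned throughput instead of autoscale. The 1100 RU/s value is illustrative.
+spark.sql(
+  """CREATE TABLE IF NOT EXISTS cosmosCatalog.`database-v4`.ThroughputControl
+    |USING cosmos.oltp
+    |OPTIONS(spark.cosmos.database = 'database-v4')
+    |TBLPROPERTIES(partitionKeyPath = '/groupId', manualThroughput = '1100', indexingPolicy = 'AllProperties', defaultTtlInSeconds = '-1')""".stripMargin)
+```
+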
+> [!IMPORTANT]
+> The partition key must be defined as `/groupId`, and `ttl` must be enabled, for the throughput control feature to work.
+
+Within the Spark config of a given application, we can then specify parameters for our workload. The example below enables throughput control and defines a throughput control group `name` and a `targetThroughputThreshold`. We also define the `database` and `container` in which the throughput control group is maintained:
+
+```scala
+ "spark.cosmos.throughputControl.enabled" -> "true",
+ "spark.cosmos.throughputControl.name" -> "SourceContainerThroughputControl",
+ "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
+ "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
+ "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
+```
+
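+The settings above are only a fragment of a full connector configuration. A minimal sketch of how they might be combined with the usual connection settings when reading a container follows; the endpoint, key, and `customer` container name are placeholders, not values from this article:
+
+```scala
+// Sketch only: combining the throughput control settings with the usual connection
+// settings. The endpoint, key, and container names below are placeholders.
+val config = Map(
+  "spark.cosmos.accountEndpoint" -> "https://<account-name>.documents.azure.com:443/",
+  "spark.cosmos.accountKey" -> "<account-key>",
+  "spark.cosmos.database" -> "database-v4",
+  "spark.cosmos.container" -> "customer",
+  "spark.cosmos.throughputControl.enabled" -> "true",
+  "spark.cosmos.throughputControl.name" -> "SourceContainerThroughputControl",
+  "spark.cosmos.throughputControl.targetThroughputThreshold" -> "0.95",
+  "spark.cosmos.throughputControl.globalControl.database" -> "database-v4",
+  "spark.cosmos.throughputControl.globalControl.container" -> "ThroughputControl"
+)
+
+val df = spark.read.format("cosmos.oltp").options(config).load()
+```
+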
+In the above example, the `targetThroughputThreshold` is defined as **0.95**, so rate limiting will occur (and requests will be retried) when clients consume more than 95% (+/- 5-10 percent) of the throughput that is allocated to the container. This configuration is stored as a document in the throughput control container, and looks like the following example:
+
+```json
+ {
+ "id": "ZGF0YWJhc2UtdjQvY3VzdG9tZXIvU291cmNlQ29udGFpbmVyVGhyb3VnaHB1dENvbnRyb2w.info",
+ "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
+ "targetThroughput": "",
+ "targetThroughputThreshold": "0.95",
+ "isDefault": true,
+ "_rid": "EHcYAPolTiABAAAAAAAAAA==",
+ "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
+ "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
+ "_attachments": "attachments/",
+ "_ts": 1651835869
+ }
+```
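+
+As an illustration with hypothetical numbers: if the container targeted by the Spark job is provisioned with 10,000 RU/s, a `targetThroughputThreshold` of 0.95 limits the clients in this throughput control group to roughly 9,500 RU/s in total.
+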
+> [!NOTE]
+> Throughput control does not do RU pre-calculation of each operation. Instead, it tracks the RU usage of each operation after it completes, based on the response header. As such, throughput control is based on an approximation - and does not guarantee that the configured amount of throughput will be available for the group at any given time.
+
+> [!WARNING]
+> The `targetThroughputThreshold` is **immutable**. If you change the target throughput threshold value, this will create a new throughput control group (but as long as you use Version 4.10.0 or later it can have the same name). You need to restart all Spark jobs that are using the group if you want to ensure they all consume the new threshold immediately (otherwise they will pick up the new threshold after the next restart).
+
+For each Spark client that uses the throughput control group, a record is created in the `ThroughputControl` container with a `ttl` of a few seconds, so the documents disappear quickly when a Spark client is no longer actively running. The record looks like the following example:
+
+```json
+ {
+ "id": "Zhjdieidjojdook3osk3okso3ksp3ospojsp92939j3299p3oj93pjp93jsps939pkp9ks39kp9339skp",
+ "groupId": "database-v4/customer/SourceContainerThroughputControl.config",
+ "_etag": "\"1782728-w98999w-ww9998w9-99990000\"",
+ "ttl": 10,
+ "initializeTime": "2022-06-26T02:24:40.054Z",
+ "loadFactor": 0.97636377638898,
+ "allocatedThroughput": 484.89444487847,
+ "_rid": "EHcYAPolTiABAAAAAAAAAA==",
+ "_self": "dbs/EHcYAA==/colls/EHcYAPolTiA=/docs/EHcYAPolTiABAAAAAAAAAA==/",
+ "_etag": "\"2101ea83-0000-1100-0000-627503dd0000\"",
+ "_attachments": "attachments/",
+ "_ts": 1651835869
+ }
+```
+
+In each client record, the `loadFactor` attribute represents the load on the given client, relative to other clients in the throughput control group. The `allocatedThroughput` attribute shows how many RUs are currently allocated to this client. The Spark Connector will adjust allocated throughput for each client based on its load. This will ensure that each client gets a share of the throughput available that is proportional to its load, and all clients together don't consume more than the total allocated for the throughput control group to which they belong.
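+
+As an illustration of this proportional split (hypothetical group budget and load factors, not the connector's actual implementation):
+
+```scala
+// Illustration only: each client's allocation is proportional to its load factor,
+// and the allocations sum to the group's budget.
+val groupBudgetRu = 1000.0
+val loadFactors = Map("clientA" -> 0.97, "clientB" -> 0.35)
+val totalLoad = loadFactors.values.sum
+val allocatedThroughput = loadFactors.map { case (client, load) =>
+  client -> groupBudgetRu * load / totalLoad
+}
+// clientA -> ~735 RU/s, clientB -> ~265 RU/s; together they don't exceed 1000 RU/s
+```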
++
+## Next steps
+
+* [Spark samples in GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples).
+* [Manage data with Azure Cosmos DB Spark 3 OLTP Connector for SQL API](create-sql-api-spark.md).
+* Learn more about [Apache Spark](https://spark.apache.org/).
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
+
+ Title: Enable preview features in Cost Management Labs
+
+description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in.
++ Last updated : 06/23/2022++++++
+# Enable preview features in Cost Management Labs
+
+Cost Management Labs is an experience in the Azure portal where you can get a sneak peek at what's coming in Cost Management. You can engage directly with us to share feedback and help us better understand how you use the service, so we can deliver more tuned and optimized experiences.
+
+This article explains how to explore preview features and provides a list of the recent previews you might be interested in.
+
+## Explore preview features
+
+You can explore preview features from the Cost Management overview.
+
+1. On the Cost Management overview page, select the [Try preview](https://aka.ms/costmgmt/trypreview) command at the top of the page.
+2. From there, enable the features you'd like to use and select **Close** at the bottom of the page.
+ :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-management-labs.png" alt-text="Screenshot showing the Cost Management labs preview options." lightbox="./media/enable-preview-features-cost-management-labs/cost-management-labs.png" :::
+3. To see the features enabled, close and reopen Cost Management. You can reopen Cost Management by selecting the link in the notification in the top-right corner.
+ :::image type="content" source="./media/enable-preview-features-cost-management-labs/reopen-cost-management.png" alt-text="Screenshot showing the Reopen Cost Management notification." :::
+
+If you're interested in getting preview features even earlier:
+
+1. Navigate to Cost Management.
+2. Select **Go to preview portal**.
+
+Or, you can go directly to the [Azure preview portal](https://preview.portal.azure.com/).
+
+It's the same experience as the public portal, except with new improvements and preview features. Every change in Cost Management is available in the preview portal a week before it's in the full Azure portal.
+
+We encourage you to try out the preview features available in Cost Management Labs and share your feedback. It's your chance to influence the future direction of Cost Management. To provide feedback, use the **Report a bug** link in the Try preview menu. It's a direct way to communicate with the Cost Management engineering team.
+
+## Anomaly detection alerts
+
+<a name="anomalyalerts"></a>
+
+Get notified by email when a cost anomaly is detected on your subscription.
+
+Anomaly detection is available for Azure global subscriptions in the cost analysis preview.
+
+Here's an example of a cost anomaly shown in cost analysis:
+++
+To configure anomaly alerts:
+
+1. Open the cost analysis preview.
+1. Navigate to **Cost alerts** and select **Add** > **Add Anomaly alert**.
++
+For more information about anomaly detection and how to configure alerts, see [Identify anomalies and unexpected changes in cost](../understand/analyze-unexpected-charges.md).
+
+**Anomaly detection is now available by default in Azure global.**
+
+## Grouping SQL databases and elastic pools
+
+<a name="aksnestedtable"></a>
+
+Get an at-a-glance view of your total SQL costs by grouping SQL databases and elastic pools. They're shown under their parent server in the cost analysis preview. This feature is enabled by default.
+
+Understanding what you're being charged for can be complicated. The best place to start for many people is the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview. It shows resources that are incurring cost. But even a straightforward list of resources can be hard to follow when a single deployment includes multiple, related resources. To help summarize your resource costs, we're trying to group related resources together. So, we're changing cost analysis to show child resources.
+
+Many Azure services use nested or child resources. SQL servers have databases, storage accounts have containers, and virtual networks have subnets. Most of the child resources are only used to configure services, but sometimes the resources have their own usage and charges. SQL databases are perhaps the most common example.
+
+SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to find the server and each database and then manually sum up their total cost. As an example, you can see the **aepool** elastic pool at the top of the list below and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
+
+Here's an example showing classic cost analysis where multiple related resource costs aren't grouped.
++
+In the cost analysis preview, the child resources are grouped together under their parent resource. The grouping shows a quick, at-a-glance view of your deployment and its total cost. Using the same subscription, you can now see all three charges grouped together under the server, offering a one-line summary for your total server costs.
+
+Here's an example showing grouped resource costs with the **Grouping SQL databases and elastic pools** preview option enabled.
++
+You might also notice the change in row count. Classic cost analysis shows 53 rows where every resource is broken out on its own. The cost analysis preview only shows 25 rows. The difference is that the individual resources are being grouped together, making it easier to get an at-a-glance cost summary.
+
+In addition to SQL servers, you'll also see other services with child resources, like App Service, Synapse, and VNet gateways. Each is similarly shown grouped together in the cost analysis preview.
+
+**Grouping SQL databases and elastic pools is available by default in the cost analysis preview.**
+
+## Average in the cost analysis preview
+
+<a name="cav3average"></a>
+
+Average in the cost analysis preview shows your average daily or monthly cost at the top of the view.
++
+When the selected date range includes the current day, the average cost is calculated ending at yesterday's date. It doesn't include partial cost from the current day because data for the day isn't complete. Every service submits usage on a different timeline, which affects the average calculation. For more information about data latency and refresh processing, see [Understand Cost Management data](understand-cost-mgt-data.md).
+
+**Average in the cost analysis preview is available by default in the cost analysis preview.**
+
+## Budgets in the cost analysis preview
+
+<a name="budgetsfeature"></a>
+
+Budgets in the cost analysis preview help you quickly create and edit budgets directly from the cost analysis preview.
++
+If you don't have a budget yet, you'll see a link to create a new budget. Budgets created from the cost analysis preview are preconfigured with alerts. Thresholds are set for cost exceeding 50 percent, 80 percent, and 95 percent of your budget, or 100 percent of your forecast for the month. You can add other recipients or update alerts from the Budgets page.
+
+**Budgets in the cost analysis preview is available by default in the cost analysis preview.**
+
+## Charts in the cost analysis preview
+
+<a name="chartsfeature"></a>
+
+Charts in the cost analysis preview include a chart of daily or monthly charges for the specified date range.
++
+Charts are enabled on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
+
+## Streamlined menu
+
+<a name="onlyinconfig"></a>
+
+Cost Management includes a central management screen for all configuration settings. Some of the settings are also available directly from the Cost Management menu currently. Enabling the **Streamlined menu** option removes configuration settings from the menu.
+
+In the following image, the menu on the left is classic cost analysis. The menu on the right is the streamlined menu.
++
+You can enable **Streamlined menu** on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Feel free to [share your feedback](https://feedback.azure.com/d365community/idea/5e0ea52c-1025-ec11-b6e6-000d3a4f07b8). As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+
+## Open config items in the menu
+
+<a name="configinmenu"></a>
+
+Cost Management includes a central management view for all configuration settings. Currently, selecting a setting opens the configuration page outside of the Cost Management menu.
++
+**Open config items in the menu** is an experimental option to open the configuration page in the Cost Management menu. The option makes it easier to switch to other menu items with one selection. The feature works best with the [streamlined menu](#streamlined-menu).
+
+You can enable **Open config items in the menu** on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal.
+
+[Share your feedback](https://feedback.azure.com/d365community/idea/1403a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+
+## Change scope from menu
+
+<a name="changescope"></a>
+
+If you manage many subscriptions and need to switch between subscriptions or resource groups often, you might want to include the **Change scope from menu** option.
++
+It allows changing the scope from the menu for quicker navigation. To enable the feature, navigate to the [Cost Management Labs preview page](https://portal.azure.com/#view/Microsoft_Azure_CostManagement/Menu/~/overview/open/overview.preview) in the Azure portal.
+
+[Share your feedback](https://feedback.azure.com/d365community/idea/e702a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
+
+## How to share feedback
+
+We're always listening and making constant improvements based on your feedback, so we welcome it. Here are a few ways to share your feedback with the team:
+
+- If you have a problem or are seeing data that doesn't make sense, submit a support request. It's the fastest way to investigate and resolve data issues and major bugs.
+- For feature requests, you can share ideas and vote up others in the [Cost Management feedback forum](https://aka.ms/costmgmt/feedback).
+- Take advantage of the **How would you rate…** prompts in the Azure portal to let us know how each experience is working for you. We monitor the feedback proactively to identify and prioritize changes. You'll see either a blue option in the bottom-right corner of the page or a banner at the top.
+
+## Next steps
+
+Learn about [what's new in Cost Management](https://azure.microsoft.com/blog/tag/cost-management/).
cost-management-billing Open Banking Strong Customer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/open-banking-strong-customer-authentication.md
As of September 14, 2019, banks in the 31 countries/regions of the [European Eco
## What PSD2 means for Azure customers
-If you pay for Azure with a credit card issued by a bank in the[European Economic Area](https://en.wikipedia.org/wiki/European_Economic_Area), you might be required to complete multi-factor authentication for the payment method of your account. You may be prompted to complete the multi-factor authentication challenge when signing up your Azure account or upgrading your Azure accountΓÇöeven if you are not making a purchase at the time. You may also be asked to provide multi-factor authentication when you change the payment method of your Azure account, remove your spending cap, or make an immediate payment from the Azure portalΓÇö such as settling outstanding balances or purchasing Azure credits.
+If you pay for Azure with a credit card issued by a bank in the [European Economic Area](https://en.wikipedia.org/wiki/European_Economic_Area), you might be required to complete multi-factor authentication for the payment method of your account. You may be prompted to complete the multi-factor authentication challenge when signing up your Azure account or upgrading your Azure account, even if you are not making a purchase at the time. You may also be asked to provide multi-factor authentication when you change the payment method of your Azure account, remove your spending cap, or make an immediate payment from the Azure portal, such as settling outstanding balances or purchasing Azure credits.
If your bank rejects your monthly Azure charges, you'll get a past due email from Azure with instructions to fix it. You can complete the multi-factor authentication challenge and settle your outstanding charges in the Azure portal.
cost-management-billing Determine Reservation Purchase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/determine-reservation-purchase.md
Previously updated : 09/20/2021 Last updated : 06/23/2022
Use the following sections to help analyze your daily usage data to determine yo
### Analyze usage for a VM reserved instance purchase
-Identify the right VM size for your purchase. For example, a reservation purchased for ES series VMs don't apply to E series VMs, and vice-versa.
+Identify the right VM size for your purchase. For example, a reservation purchased for ES series VMs doesn't apply to E series VMs, and vice-versa.
Promo series VMs don't get a reservation discount, so remove them from your analysis.
If you want to analyze at the instance size family level, you can get the instan
Reserved capacity applies to Azure Synapse Analytics DWU pricing. It doesn't apply to Azure Synapse Analytics license cost or any costs other than compute.
-To narrow eligible usage, apply follow filters on your usage data:
-
+To narrow eligible usage, apply the following filters to your usage data:
- Filter **MeterCategory** for **SQL Database**. - Filter **MeterName** for **vCore**.
The data informs you about the consistent usage for:
### Analysis for Azure Synapse Analytics
-Reserved capacity applies to Azure Synapse Analytics DWU usage and is purchased in increments on 100 DWU. To narrow eligible usage, apply the follow filters on your usage data:
+Reserved capacity applies to Azure Synapse Analytics DWU usage and is purchased in increments of 100 DWUs. To narrow eligible usage, apply the following filters to your usage data:
- Filter **MeterName** for **100 DWUs**. - Filter **Meter Sub-Category** for **Compute Optimized Gen2**.
Learn more about [recommendations](reserved-instance-purchase-recommendations.md
## Recommendations in the Cost Management Power BI app
-Enterprise Agreement customers can use the VM RI Coverage reports for VMs and purchase recommendations. The coverage reports show you total usage and the usage that's covered by reserved instances.
+Enterprise Agreement customers can use the VM RI Coverage reports for VMs and purchase recommendations. The coverage reports show total usage and the usage that's covered by reserved instances.
1. Get the [Cost Management App](https://appsource.microsoft.com/product/power-bi/costmanagement.azurecostmanagementapp). 2. Go to the VM RI Coverage report - Shared or Single scope, depending on which scope you want to purchase at.
Enterprise Agreement customers can use the VM RI Coverage reports for VMs and pu
Reservation purchase recommendations are available in [Azure Advisor](https://portal.azure.com/#blade/Microsoft_Azure_Expert/AdvisorMenuBlade/overview). - Advisor has only single-subscription scope recommendations.-- Advisor recommendations are calculated using 30-day look-back period. The projected savings are for a 3-year reservation term.
+- Advisor recommendations are calculated using a 30-day look-back period. The projected savings are for a three-year reservation term.
- If you purchase a shared-scope reservation, Advisor reservation purchase recommendations can take up to 30 days to disappear. ## Recommendations using APIs
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-global-parameters.md
description: Set global parameters for each of your Azure Data Factory environme
--++ Last updated 01/31/2022
Global parameters can be used in any [pipeline expression](control-flow-expressi
## <a name="cicd"></a> Global parameters in CI/CD
-There are two ways to integrate global parameters in your continuous integration and deployment solution:
+We recommend including global parameters in the ARM template during CI/CD. The new mechanism of including global parameters in the ARM template (from 'Manage hub' -> 'ARM template' -> 'Include global parameters in ARM template'), as illustrated below, doesn't conflict with or override factory-level settings as the older mechanism did, so no additional PowerShell step is required to deploy global parameters during CI/CD.
-* Include global parameters in the ARM template
-* Deploy global parameters via a PowerShell script
+
+> [!NOTE]
+> We have moved the UI experience for including global parameters from the 'Global parameters' section to the 'ARM template' section in the manage hub.
+If you're already using the older mechanism (from 'Manage hub' -> 'Global parameters' -> 'Include in ARM template'), you can continue to use it; we will continue to support it.
+
+If you are using the older flow of integrating global parameters in your continuous integration and deployment solution, it will continue to work:
-For general use cases, it is recommended to include global parameters in the ARM template. This integrates natively with the solution outlined in [the CI/CD doc](continuous-integration-delivery.md). In case of automatic publishing and Microsoft Purview connection, **PowerShell script** method is required. You can find more about PowerShell script method later. Global parameters will be added as an ARM template parameter by default as they often change from environment to environment. You can enable the inclusion of global parameters in the ARM template from the **Manage** hub.
+* Include global parameters in the ARM template (from 'Manage hub' -> 'Global parameters' -> 'Include in ARM template')
+* Deploy global parameters via a PowerShell script
+
+We strongly recommend using the new mechanism of including global parameters in the ARM template (from 'Manage hub' -> 'ARM template' -> 'Include global parameters in an ARM template'), since it makes CI/CD with global parameters much more straightforward and easier to manage.
> [!NOTE]
-> The **Include in ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode. In case of automatic publishing or Microsoft Purview connection, do not use Include global parameters method; use PowerShell script method.
+> The **Include global parameters in an ARM template** configuration is only available in "Git mode". Currently it is disabled in "live mode" or "Data Factory" mode.
> [!WARNING] >You cannot use '-' in the parameter name. You will receive an error code "{"code":"BadRequest","message":"ErrorCode=InvalidTemplate,ErrorMessage=The expression >'pipeline().globalParameters.myparam-dbtest-url' is not valid: .....}". But, you can use '_' in the parameter name.
-Adding global parameters to the ARM template adds a factory-level setting that will override other factory-level settings such as a customer-managed key or git configuration in other environments. If you have these settings enabled in an elevated environment such as UAT or PROD, it's better to deploy global parameters via a PowerShell script in the steps highlighted below.
-### Deploying using PowerShell
+### Deploying using PowerShell (older mechanism)
+
+> [!NOTE]
+> This is not required if you're including global parameters by using 'Manage hub' -> 'ARM template' -> 'Include global parameters in an ARM template', since that approach lets you deploy the ARM template without breaking the factory-level configurations. For backward compatibility, we will continue to support the PowerShell method.
The following steps outline how to deploy global parameters via PowerShell. This is useful when your target factory has a factory-level setting such as a customer-managed key.
foreach ($gp in $globalParametersObject.GetEnumerator()) {
Write-Host "Adding global parameter:" $gp.Key $globalParameterValue = $gp.Value.ToObject([Microsoft.Azure.Management.DataFactory.Models.GlobalParameterSpecification]) $newGlobalParameters.Add($gp.Key, $globalParameterValue)
-}
+}
$dataFactory = Get-AzDataFactoryV2 -ResourceGroupName $resourceGroupName -Name $dataFactoryName $dataFactory.GlobalParameters = $newGlobalParameters
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
By default, every data flow activity spins up a new Spark cluster based upon the
However, if most of your data flows execute in parallel, it is not recommended that you enable TTL for the IR that you use for those activities. Only one job can run on a single cluster at a time. If there is an available cluster, but two data flows start, only one will use the live cluster. The second job will spin up its own isolated cluster. > [!NOTE]
-> Time to live is not available when using the auto-resolve integration runtime
+> Time to live is not available when using the auto-resolve integration runtime (default).
## Next steps
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Title: Managing Azure Data Factory Studio preview updates
-description: Learn how to enable/disable Azure Data Factory studio preview updates.
+description: Learn more about the Azure Data Factory studio preview experience.
Previously updated : 06/21/2022 Last updated : 06/23/2022 # Manage Azure Data Factory studio preview experience
There are two ways to enable preview experiences.
## Current Preview Updates
-### Dataflow Data first experimental view
+ [**Dataflow data first experimental view**](#dataflow-data-first-experimental-view)
+ * [Configuration panel](#configuration-panel)
+ * [Transformation settings](#transformation-settings)
+ * [Data preview](#data-preview)
+
+ [**Pipeline experimental view**](#pipeline-experimental-view)
+ * [Adding activities](#adding-activities)
+ * [ForEach activity container](#foreach-activity-container)
+
+### Dataflow data first experimental view
UI (user interface) changes have been made to mapping data flows. These changes were made to simplify and streamline the dataflow creation process so that you can focus on what your data looks like. The dataflow authoring experience remains the same as detailed [here](https://aka.ms/adfdataflows), except for certain areas detailed below.
Now, for each transformation, the configuration panel will only have **Data Prev
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-6.png" alt-text="Screenshot of the configuration panel with only a Data preview tab."::: If no transformation is selected, the panel will show the pre-existing data flow configurations: **Parameters** and **Settings**.
-
-
+ #### Transformation settings Settings specific to a transformation will now show in a pop up instead of the configuration panel. With each new transformation, a corresponding pop-up will automatically appear.
If debug mode is on, **Data Preview** in the configuration panel will give you a
Columns can be rearranged by dragging a column by its header. You can also sort columns using the arrows next to the column titles and you can export data preview data using **Export to CSV** on the banner above column headers. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-9.png" alt-text="Screenshot of Data preview with Export button in the top right corner of the banner and Elapsed Time highlighted in the bottom left corner of the screen.":::
-
+ ### Pipeline experimental view UI (user interface) changes have been made to activities in the pipeline editor canvas. These changes were made to simplify and streamline the pipeline creation process. + #### Adding activities You now have the option to add an activity using the add button in the bottom right corner of an activity in the pipeline editor canvas. Clicking the button will open a drop-down list of all activities that you can add.
You now have the option to add an activity using the add button in the bottom ri
Select an activity by using the search box or scrolling through the listed activities. The selected activity will be added to the canvas and automatically linked with the previous activity on success. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-10.png" alt-text="Screenshot of new pipeline activity adding experience with a drop down list to select activities.":::
-
+ #### ForEach activity container You can now view the activities contained in your ForEach activity.
You can now view the activities contained in your ForEach activity.
:::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-11.png" alt-text="Screenshot of new ForEach activity container."::: You have two options to add activities to your ForEach loop.+ 1. Use the + button in your ForEach container to add an activity. :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-12.png" alt-text="Screenshot of new ForEach activity container with the add button highlighted on the left side of the center of the screen.":::
data-factory Data Factory Azure Blob Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-azure-blob-connector.md
Whether you use the tools or APIs, you perform the following steps to create a p
2. Create **datasets** to represent input and output data for the copy operation. In the example mentioned in the last step, you create a dataset to specify the blob container and folder that contains the input data. And, you create another dataset to specify the SQL table in Azure SQL Database that holds the data copied from the blob storage. For dataset properties that are specific to Azure Blob Storage, see [dataset properties](#dataset-properties) section. 3. Create a **pipeline** with a copy activity that takes a dataset as an input and a dataset as an output. In the example mentioned earlier, you use BlobSource as a source and SqlSink as a sink for the copy activity. Similarly, if you are copying from Azure SQL Database to Azure Blob Storage, you use SqlSource and BlobSink in the copy activity. For copy activity properties that are specific to Azure Blob Storage, see [copy activity properties](#copy-activity-properties) section. For details on how to use a data store as a source or a sink, click the link in the previous section for your data store.
-When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except .NET API), you define these Data Factory entities by using the JSON format. For samples with JSON definitions for Data Factory entities that are used to copy data to/from an Azure Blob Storage, see [JSON examples](#json-examples-for-copying-data-to-and-from-blob-storage ) section of this article.
+When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except .NET API), you define these Data Factory entities by using the JSON format. For samples with JSON definitions for Data Factory entities that are used to copy data to/from an Azure Blob Storage, see [JSON examples](#json-examples-for-copying-data-to-and-from-blob-storage) section of this article.
The following sections provide details about JSON properties that are used to define Data Factory entities specific to Azure Blob Storage.
data-factory Data Factory Data Movement Security Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-data-movement-security-considerations.md
The following cloud data stores require approving of IP address of the gateway m
**Answer:** We do not support this feature yet. We are actively working on it. **Question:** What are the port requirements for the gateway to work?
-**Answer:** Gateway makes HTTP-based connections to open internet. The **outbound ports 443 and 80** must be opened for gateway to make this connection. Open **Inbound Port 8050** only at the machine level (not at corporate firewall level) for Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as source/ destination, then you need to open **1433** port as well. For more information, see [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of gateway) section.
+**Answer:** The gateway makes HTTP-based connections to the open internet. The **outbound ports 443 and 80** must be opened for the gateway to make this connection. Open **Inbound Port 8050** only at the machine level (not at the corporate firewall level) for the Credential Manager application. If Azure SQL Database or Azure Synapse Analytics is used as a source/destination, then you need to open port **1433** as well. For more information, see the [Firewall configurations and filtering IP addresses](#firewall-configurations-and-filtering-ip-address-of-gateway) section.
**Question:** What are certificate requirements for Gateway? **Answer:** Current gateway requires a certificate that is used by the credential manager application for securely setting data store credentials. This certificate is a self-signed certificate created and configured by the gateway setup. You can use your own TLS/SSL certificate instead. For more information, see [click-once credential manager application](#click-once-credentials-manager-app) section.
databox-online Azure Stack Edge Mini R System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-system-requirements.md
To understand and refine the performance of your solution, you could use:
- `dkr image [prune]` to clean up unused images and free up space. - `dkr ps --size` to view the approximate size of a running container.
- For more information on the available commands, go to [ Debug Kubernetes issues](azure-stack-edge-gpu-connect-powershell-interface.md#debug-kubernetes-issues-related-to-iot-edge).
+ For more information on the available commands, go to [Debug Kubernetes issues](azure-stack-edge-gpu-connect-powershell-interface.md#debug-kubernetes-issues-related-to-iot-edge).
Finally, make sure that you validate your solution on your dataset and quantify the performance on Azure Stack Edge Mini R before deploying in production.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
The description has also been updated to better explain the purpose of this hard
| Recommendation | Description | Severity | |--|--|:--:|
-| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machineΓÇÖs OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches arenΓÇÖt encrypted, and data isnΓÇÖt encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see <https://aka.ms/diskencryptioncomparison>.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) youΓÇÖre using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
+| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For more information, see the [comparison of different disk encryption technologies in Azure](https://aka.ms/diskencryptioncomparison).<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
### Continuous export of secure score and regulatory compliance data released for general availability (GA)
To access this information, you can use any of the methods in the table below.
| Tool | Details | |-||
-| REST API call | GET <https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates> |
+| REST API call | `GET https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/providers/Microsoft.Security/assessments?api-version=2019-01-01-preview&$expand=statusEvaluationDates` |
| Azure Resource Graph | `securityresources`<br>`where type == "microsoft.security/assessments"` | | Continuous export | The two dedicated fields will be available the Log Analytics workspace data | | [CSV export](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations) | The two fields are included in the CSV files |
To ensure that Kubernetes workloads are secure by default, Security Center is ad
The early phase of this project includes a private preview and the addition of new (disabled by default) policies to the ASC_default initiative.
-You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview at <https://aka.ms/SecurityPrP> and select from the following options:
+You can safely ignore these policies and there will be no impact on your environment. If you'd like to enable them, sign up for the preview via the [Microsoft Cloud Security
+Private Community](https://aka.ms/SecurityPrP) and select from the following options:
1. **Single Preview** – To join only this private preview. Explicitly mention "ASC Continuous Scan" as the preview you would like to join. 1. **Ongoing Program** – To be added to this and future private previews. You'll need to complete a profile and privacy agreement.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 06/20/2022 Last updated : 06/23/2022 # What's new in Microsoft Defender for Cloud?
Learn how to [enable protections](enable-enhanced-security.md) for your database
### Auto-provisioning of Microsoft Defender for Endpoint unified solution
-Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows servers 2012R2 and 2016 used the MDE legacy solution dependent on Log Analytics agent.
+Until now, the integration with Microsoft Defender for Endpoint (MDE) included automatic installation of the new [MDE unified solution](/microsoft-365/security/defender-endpoint/configure-server-endpoints?view=o365-worldwide#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution&preserve-view=true) for machines (Azure subscriptions and multicloud connectors) with Defender for Servers Plan 1 enabled, and for multicloud connectors with Defender for Servers Plan 2 enabled. Plan 2 for Azure subscriptions enabled the unified solution for Linux machines and Windows 2019 and 2022 servers only. Windows servers 2012R2 and 2016 used the MDE legacy solution dependent on Log Analytics agent.
Now, the new unified solution is available for all machines in both plans, for both Azure subscriptions and multi-cloud connectors. For Azure subscriptions with Servers plan 2 that enabled MDE integration *after* 06-20-2022, the unified solution is enabled by default for all machines. Azure subscriptions with the Defender for Servers Plan 2 enabled with MDE integration *before* 06-20-2022 can now enable unified solution installation for Windows servers 2012R2 and 2016 through the dedicated button in the Integrations page: :::image type="content" source="media/integration-defender-for-endpoint/enable-unified-solution.png" alt-text="The integration between Microsoft Defender for Cloud and Microsoft's EDR solution, Microsoft Defender for Endpoint, is enabled." lightbox="media/integration-defender-for-endpoint/enable-unified-solution.png":::
-Learn more about [MDE integration with Defender for Servers.](integration-defender-for-endpoint.md#users-with-defender-for-servers-enabled-and-microsoft-defender-for-endpoint-deployed).
+Learn more about [MDE integration with Defender for Servers](integration-defender-for-endpoint.md#users-with-defender-for-servers-enabled-and-microsoft-defender-for-endpoint-deployed).
### Deprecating the "API App should only be accessible over HTTPS" policy The policy `API App should only be accessible over HTTPS` has been deprecated. This policy is replaced with the `Web Application should only be accessible over HTTPS` policy, which has been renamed to `App Service apps should only be accessible over HTTPS`.
-To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md)
+To learn more about policy definitions for Azure App Service, see [Azure Policy built-in definitions for Azure App Service](../azure-app-configuration/policy-reference.md).
## May 2022
defender-for-iot How To Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-agent-configuration.md
To use a default property value, remove the property from the configuration obje
The following table contains the controllable properties of Defender for IoT security agents.
-Default values are available in the proper schema in [GitHub](https\://aka.ms/iot-security-module-default).
+Default values are available in the proper schema in [GitHub](https://aka.ms/iot-security-module-default).
| Name| Status | Valid values| Default values| Description | |-|--|--|-|-|
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
To send notifications:
For more information about forwarding rules, see [Forward alert information](how-to-forward-alert-information-to-partners.md). +
+## Upload and play PCAP files
+
+When troubleshooting, you may want to examine the data recorded in a specific PCAP file. To do so, you can upload a PCAP file to your sensor console and replay the recorded data.
+
+To view the PCAP player in your sensor console, you'll first need to configure the relevant advanced configuration option.
+
+Maximum size for uploaded files is 2 GB.
+
+**To show the PCAP player in your sensor console**:
+
+1. On your sensor console, go to **System settings > Sensor management > Advanced Configurations**.
+
+1. In the **Advanced configurations** pane, select the **Pcaps** category.
+
+1. In the configurations displayed, change `enabled=0` to `enabled=1`, and select **Save**.
+
+The **Play PCAP** option is now available in the sensor console's settings, under: **System settings > Basic > Play PCAP**.
+
+**To upload and play a PCAP file**:
+
+1. On your sensor console, select **System settings > Basic > Play PCAP**.
+
+1. In the **PCAP PLAYER** pane, select **Upload** and then navigate to and select the file you want to upload.
+
+1. Select **Play** to play your PCAP file, or **Play All** to play all PCAP files currently loaded.
+
+> [!TIP]
+> Select **Clear All** to clear the sensor of all PCAP files loaded.
+ ## Adjust system properties System properties control various operations and settings in the sensor. Editing or modifying them might damage the operation of the sensor console.
To access system properties:
3. Select **System Properties** from the **General** section. + ## Next steps For more information, see:
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Export alert information to a .csv file. You can export information of all alert
1. Select **Save**.
-You can learn more [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information). You can also [Test forwarding rules](how-to-forward-alert-information-to-partners.md#test-forwarding-rules), or [Edit and delete forwarding rules](how-to-forward-alert-information-to-partners.md#edit-and-delete-forwarding-rules). You can also learn more about[Forwarding rules and alert exclusion rules](how-to-forward-alert-information-to-partners.md#forwarding-rules-and-alert-exclusion-rules).
+You can learn more [About forwarded alert information](how-to-forward-alert-information-to-partners.md#about-forwarded-alert-information). You can also [Test forwarding rules](how-to-forward-alert-information-to-partners.md#test-forwarding-rules), or [Edit and delete forwarding rules](how-to-forward-alert-information-to-partners.md#edit-and-delete-forwarding-rules). You can also learn more about [Forwarding rules and alert exclusion rules](how-to-forward-alert-information-to-partners.md#forwarding-rules-and-alert-exclusion-rules).
## Create alert exclusion rules
defender-for-iot Integrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/integrate-overview.md
+
+ Title: Integrations with partner services - Microsoft Defender for IoT
+description: Learn about supported integrations with Microsoft Defender for IoT.
Last updated : 06/21/2022+++
+# Integrations with partner services
+
+Integrate Microsoft Defender for IoT with partner services to view partner data in Defender for IoT, or to view Defender for IoT data in a partner service.
+
+## Supported integrations
+
+The following table lists available integrations for Microsoft Defender for IoT, as well as links to specific configuration information.
++
+|Partner service |Description | Learn more |
+||||
+|**Aruba ClearPass** | Share Defender for IoT data with ClearPass Security Exchange and update the ClearPass Policy Manager Endpoint Database with Defender for IoT data. | [Integrate ClearPass with Microsoft Defender for IoT](tutorial-clearpass.md) |
+|**CyberArk** | Send CyberArk PSM syslog data on remote sessions and verification failures to Defender for IoT for data correlation. | [Integrate CyberArk with Microsoft Defender for IoT](tutorial-cyberark.md) |
+|**Forescout** | Automate actions in Forescout based on activity detected by Defender for IoT, and correlate Defender for IoT data with other *Forescout eyeExtended* modules that oversee monitoring, incident management, and device control. | [Integrate Forescout with Microsoft Defender for IoT](tutorial-forescout.md) |
+|**Fortinet** | Send Defender for IoT data to Fortinet services for: <br><br>- Enhanced network visibility in FortiSIEM<br>- Extra abilities in FortiGate to stop anomalous behavior | [Integrate Fortinet with Microsoft Defender for IoT](tutorial-fortinet.md) |
+|**Palo Alto** |Use Defender for IoT data to block critical threats with Palo Alto firewalls, either with automatic blocking or with blocking recommendations. | [Integrate Palo Alto with Microsoft Defender for IoT](tutorial-palo-alto.md) |
+|**QRadar** |Forward Defender for IoT alerts to IBM QRadar. | [Integrate QRadar with Microsoft Defender for IoT](tutorial-qradar.md) |
+|**ServiceNow** | View Defender for IoT device detections, attributes, and connections in ServiceNow. | [Integrate ServiceNow with Microsoft Defender for IoT](tutorial-servicenow.md) |
| **Splunk** | Send Defender for IoT alerts to Splunk. | [Integrate Splunk with Microsoft Defender for IoT](tutorial-splunk.md) |
+|**Axonius Cybersecurity Asset Management** | Import and manage device inventory discovered by Defender for IoT in your Axonius instance. | [Axonius documentation](https://docs.axonius.com/docs/azure-defender-for-iot) |
+
+## Next steps
+
+For more information, see:
+
+**Device inventory**:
+
+- [Use the Device inventory in the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Use the Device inventory in the OT sensor](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+- [Use the Device inventory in the on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
+
+**Alerts**:
+
+- [View alerts in the Azure portal](how-to-manage-cloud-alerts.md)
+- [View alerts in the OT sensor](how-to-view-alerts.md)
+- [View alerts in the on-premises management console](how-to-work-with-alerts-on-premises-management-console.md)
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| Version | Date released | End support date | |--|--|--|
-| 22.1.5 | 06/2022 | 03/2022 |
+| 22.1.5 | 06/2022 | 03/2023 |
| 22.1.4 | 04/2022 | 12/2022 | | 22.1.3 | 03/2022 | 11/2022 | | 22.1.1 | 02/2022 | 10/2022 |
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 | | 10.5.2 | 10/2021 | 07/2022 |
-## June
+## June 2022
**Sensor software version**: 22.1.5
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
# Mandatory fields. Title: What is Azure Digital Twins?
-description: Overview of Azure Digital Twins, what the service comprises, and how it can be used in a wider cloud solution.
+description: Overview of the Azure Digital Twins IoT platform, including its features and value.
Previously updated : 03/24/2022 Last updated : 06/17/2022
*Azure Digital Twins* is a platform as a service (PaaS) offering that enables the creation of twin graphs based on digital models of entire environments, which could be buildings, factories, farms, energy networks, railways, stadiums, and more, even entire cities. These digital models can be used to gain insights that drive better products, optimized operations, reduced costs, and breakthrough customer experiences.
-Azure Digital Twins can be used to design a digital twin architecture that represents actual IoT devices in a wider cloud solution, and which connects to IoT Hub device twins to send and receive live data.
+Azure Digital Twins can be used to design a digital twin architecture that represents actual IoT devices in a wider cloud solution, and which connects to [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) device twins to send and receive live data.
> [!NOTE] > IoT Hub **device twins** are different from Azure Digital Twins **digital twins**. While *IoT Hub device twins* are maintained by your IoT hub for each IoT device that you connect to it, *digital twins* in Azure Digital Twins can be representations of anything defined by digital models and instantiated within Azure Digital Twins. Take advantage of your domain expertise on top of Azure Digital Twins to build customized, connected solutions that: * Model any environment, and bring digital twins to life in a scalable and secure manner
-* Connect assets such as IoT devices and existing business systems
-* Use a robust event system to build dynamic business logic and data processing
-* Integrate with Azure data, analytics, and AI services to help you track the past and then predict the future
+* Connect assets such as IoT devices and existing business systems, using a robust event system to build dynamic business logic and data processing
+* Query the live execution environment to extract real-time insights from your twin graph
+* Build connected 3D visualizations of your environment that display business logic and twin data in context
+* Query historized environment data and integrate with other Azure data, analytics, and AI services to better track the past and predict the future
-## Open modeling language
+## Define your business environment
In Azure Digital Twins, you define the digital entities that represent the people, places, and things in your physical environment using custom twin types called [models](concepts-models.md).
-You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define models such as Building, Floor, and Elevator. You can then create digital twins based on these models to represent your specific environment.
+You can think of these model definitions as a specialized vocabulary to describe your business. For a building management solution, for example, you might define models for a Building type, a Floor type, and an Elevator type. Models are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe types of entities according to their state properties, telemetry events, commands, components, and relationships. You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
-*Models* are defined in a JSON-like language called [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), and they describe twins by their state properties, telemetry events, commands, components, and relationships. Here are some other capabilities of models:
-* Models define semantic *relationships* between your entities so that you can connect your twins into a graph that reflects their interactions. You can think of the models as nouns in a description of your world, and the relationships as verbs.
-* You can specialize twins using model *inheritance*. One model can inherit from another.
-* You can design your own model sets from scratch, or get started with a pre-existing set of [DTDL industry ontologies](concepts-ontologies.md) based on common vocabulary for your industry.
+>[!TIP]
+>DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem.
-DTDL is also used for data models throughout other Azure IoT services, including [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) and [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). This compatibility helps you connect your Azure Digital Twins solution with other parts of the Azure ecosystem.
+Once you've defined your data models, use them to create [digital twins](concepts-twins-graph.md) that represent each specific entity in your environment. For example, you might use the Building model definition to create several Building-type twins (Building 1, Building 2, and so on). You can also use the relationships in the model definitions to connect twins to each other, forming a conceptual graph.
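As a hedged illustration of these steps, the following Python sketch uses the `azure-digitaltwins-core` SDK to upload two small hypothetical DTDL interfaces, create twins from them, and connect the twins with a relationship. The instance URL, model IDs (`dtmi:example:Building;1`, `dtmi:example:Floor;1`), twin IDs, and property names are illustrative assumptions, not values from this article.

```python
# Minimal sketch: upload hypothetical DTDL models, create twins, and connect
# them with a relationship using the azure-digitaltwins-core SDK.
# The instance URL, model IDs, twin IDs, and property names are assumptions.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

adt_url = "https://<your-instance>.api.<region>.digitaltwins.azure.net"
client = DigitalTwinsClient(adt_url, DefaultAzureCredential())

# Two small DTDL interfaces: a Building that can contain Floors.
building_model = {
    "@id": "dtmi:example:Building;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Building",
    "contents": [
        {"@type": "Property", "name": "Temperature", "schema": "double"},
        {"@type": "Relationship", "name": "contains", "target": "dtmi:example:Floor;1"},
    ],
}
floor_model = {
    "@id": "dtmi:example:Floor;1",
    "@type": "Interface",
    "@context": "dtmi:dtdl:context;2",
    "displayName": "Floor",
}
client.create_models([building_model, floor_model])

# Create twins from the models, then connect them to start forming a graph.
client.upsert_digital_twin(
    "Building1", {"$metadata": {"$model": "dtmi:example:Building;1"}, "Temperature": 21.5}
)
client.upsert_digital_twin("Floor1", {"$metadata": {"$model": "dtmi:example:Floor;1"}})
client.upsert_relationship(
    "Building1",
    "Building1-contains-Floor1",
    {"$targetId": "Floor1", "$relationshipName": "contains"},
)
```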
-## Live execution environment
+You can view your Azure Digital Twins graph in [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), which provides an interface to help you build and interact with your graph:
-Digital models in Azure Digital Twins are live, up-to-date representations of the real world. Using the relationships in your custom DTDL models, you'll connect twins into a live graph representing your environment.
-You can visualize your Azure Digital Twins graph in [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md), which provides the following interface for interacting with your graph:
+## Contextualize IoT and business system data
+Digital models in Azure Digital Twins are live, up-to-date representations of the real world.
+
+To keep digital twin properties current against your environment, you can use [IoT Hub](../iot-hub/about-iot-hub.md) to connect your solution to IoT and IoT Edge devices. These hub-managed devices are represented as part of your twin graph, and provide the data that drives your model. You can create a new IoT Hub to use with Azure Digital Twins, or [connect an existing IoT Hub](how-to-ingest-iot-hub-data.md) along with the devices it already manages.
+
+You can also drive Azure Digital Twins from other data sources, using [REST APIs](concepts-apis-sdks.md) or connectors to other Azure services like [Logic Apps](../logic-apps/logic-apps-overview.md). These methods can help you input data from business systems and incorporate it into your twin graph.
+
+Azure Digital Twins provides a rich event system to keep your graph current, including data processing that can be customized to match your business logic. You can connect external compute resources, such as [Azure Functions](../azure-functions/functions-overview.md), to drive this data processing in flexible, customized ways.
-Azure Digital Twins provides a rich event system to keep that graph current with data processing and business logic. You can connect external compute resources, such as [Azure Functions](../azure-functions/functions-overview.md), to drive this data processing in flexible, customized ways.
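As a rough sketch of the kind of update such an external compute resource might perform when new telemetry arrives, the following Python snippet patches a twin property through the `azure-digitaltwins-core` SDK; the twin ID and property path reuse the illustrative names from the earlier sketch.

```python
# Minimal sketch: apply a JSON Patch to a digital twin, the kind of update an
# ingestion function might make when a device reports new telemetry.
# "Building1" and "/Temperature" are illustrative names, not values from the article.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

adt_url = "https://<your-instance>.api.<region>.digitaltwins.azure.net"
client = DigitalTwinsClient(adt_url, DefaultAzureCredential())

# Use "add" instead of "replace" if the property might not be set yet.
patch = [{"op": "replace", "path": "/Temperature", "value": 23.7}]
client.update_digital_twin("Building1", patch)
```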
+## Query for environment insights
-You can also extract insights from the live execution environment, using Azure Digital Twins' powerful *query API*. The API lets you query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you.
+Azure Digital Twins provides a powerful query API to help you extract insights from the live execution environment. The API can query with extensive search conditions, including property values, relationships, relationship properties, model information, and more. You can also combine queries, gathering a broad range of insights about your environment and answering custom questions that are important to you. For more details about the language used to craft these queries, see [Query language](concepts-query-language.md).
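For a sense of what this looks like in code, here's a hedged Python sketch that runs a query through the `azure-digitaltwins-core` SDK; the model ID and property name are the same illustrative values used in the sketches above.

```python
# Minimal sketch: query the live twin graph for twins of a given model whose
# Temperature exceeds a threshold. Model ID and property name are assumptions.
from azure.identity import DefaultAzureCredential
from azure.digitaltwins.core import DigitalTwinsClient

adt_url = "https://<your-instance>.api.<region>.digitaltwins.azure.net"
client = DigitalTwinsClient(adt_url, DefaultAzureCredential())

query = (
    "SELECT * FROM DIGITALTWINS "
    "WHERE IS_OF_MODEL('dtmi:example:Building;1') AND Temperature > 22"
)
for twin in client.query_twins(query):
    print(twin["$dtId"], twin.get("Temperature"))
```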
-## Input from IoT and business systems
+## Visualize environment in 3D Scenes Studio (preview)
-To keep the live execution environment of Azure Digital Twins up to date with the real world, you can use [IoT Hub](../iot-hub/about-iot-hub.md) to connect your solution to IoT and IoT Edge devices. These hub-managed devices are represented as part of your twin graph, and provide the data that drives your model.
+Azure Digital Twins [3D Scenes Studio (preview)](concepts-3d-scenes-studio.md) is an immersive visual 3D environment, where end users can monitor, diagnose, and investigate operational digital twin data with the visual context of 3D assets. With a digital twin graph and curated 3D model, subject matter experts can use the studio's low-code builder to map the 3D elements to digital twins in the Azure Digital Twins graph, and define UI interactivity and business logic for a 3D visualization of a business environment. The 3D scenes can then be consumed in the hosted 3D Scenes Studio, or in a custom application that uses the embeddable 3D viewer component.
-You can create a new IoT Hub for this purpose with Azure Digital Twins, or [connect an existing IoT Hub](how-to-ingest-iot-hub-data.md) along with the devices it already manages.
+Here's an example of a scene in 3D Scenes Studio, showing how digital twin properties can be visualized with 3D elements:
-You can also drive Azure Digital Twins from other data sources, using REST APIs or connectors to other services like [Logic Apps](../logic-apps/logic-apps-overview.md).
-## Output data for storage and analytics
+## Share twin data to other Azure services
The data in your Azure Digital Twins model can be routed to downstream Azure services for more analytics or storage.
Here are some things you can do with event routes in Azure Digital Twins:
Flexible egress of data is another way that Azure Digital Twins can connect into a larger solution, and support your custom needs for continued work with these insights.
-## Azure Digital Twins in a solution context
+## Sample solution architecture
Azure Digital Twins is commonly used in combination with other Azure services as part of a larger IoT solution.
-A sample architecture of a complete solution using Azure Digital Twins may contain the following components:
+A possible architecture of a complete solution using Azure Digital Twins may contain the following components:
* The Azure Digital Twins service instance. This service stores your twin models and your twin graph with its state, and orchestrates event processing. * One or more client apps that drive the Azure Digital Twins instance by configuring models, creating topology, and extracting insights from the twin graph. * One or more external compute resources to process events generated by Azure Digital Twins, or connected data sources such as devices. One common way to provide compute resources is via [Azure Functions](../azure-functions/functions-overview.md). * An IoT hub to provide device management and IoT data stream capabilities. * Downstream services to provide things like workflow integration (like Logic Apps), cold storage (like Azure Data Lake), or analytics (like Azure Data Explorer or Time Series Insights).
-The following diagram shows where Azure Digital Twins lies in the context of a larger Azure IoT solution.
+The following diagram shows where Azure Digital Twins might lie in the context of a larger sample Azure IoT solution.
:::image type="content" source="media/overview/solution-context.png" alt-text="Diagram showing input sources, output services, and two-way communication with both client apps and external compute resources." border="false" lightbox="media/overview/solution-context.png":::
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Write-Host "Press [ENTER] to continue..."
For a step-by-step tutorial that guides you through the process of creating a template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
For other ways to deploy Azure Database Migration Service, see: - [Azure portal](quickstart-create-data-migration-service-portal.md)
event-hubs Event Processor Balance Partition Load https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-processor-balance-partition-load.md
Partition ownership records in the checkpoint store keep track of Event Hubs nam
| | | : | | | | | mynamespace.servicebus.windows.net | myeventhub | myconsumergroup | 844bd8fb-1f3a-4580-984d-6324f9e208af | 15 | 2020-01-15T01:22:00 |
-Each event processor instance acquires ownership of a partition and starts processing the partition from last known [checkpoint](# Checkpointing). If a processor fails (VM shuts down), then other instances detect it by looking at the last modified time. Other instances try to get ownership of the partitions previously owned by the inactive instance, and the checkpoint store guarantees that only one of the instances succeeds in claiming ownership of a partition. So, at any given point of time, there is at most one processor that receives events from a partition.
+Each event processor instance acquires ownership of a partition and starts processing the partition from last known [checkpoint](#checkpointing). If a processor fails (VM shuts down), then other instances detect it by looking at the last modified time. Other instances try to get ownership of the partitions previously owned by the inactive instance, and the checkpoint store guarantees that only one of the instances succeeds in claiming ownership of a partition. So, at any given point of time, there is at most one processor that receives events from a partition.
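To make the flow concrete, here's a minimal Python sketch of an event processor that uses a blob-based checkpoint store for partition ownership and checkpointing. It assumes the `azure-eventhub` and `azure-eventhub-checkpointstoreblob` packages; the connection strings, event hub name, and container name are placeholders.

```python
# Minimal sketch: an event processor that load-balances partitions and
# checkpoints progress in Azure Blob Storage. Connection strings, the event hub
# name, and the container name are placeholders.
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

checkpoint_store = BlobCheckpointStore.from_connection_string(
    "<storage-connection-string>", "<blob-container-name>"
)
client = EventHubConsumerClient.from_connection_string(
    "<event-hubs-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="myeventhub",
    checkpoint_store=checkpoint_store,  # enables ownership tracking and checkpoints
)

def on_event(partition_context, event):
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")
    # Record progress so another instance can resume from here if this one fails.
    partition_context.update_checkpoint(event)

with client:
    # Start from the beginning of each partition unless a checkpoint exists.
    client.receive(on_event=on_event, starting_position="-1")
```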
## Receive messages
expressroute Cross Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/cross-network-connectivity.md
The following table shows the route table of the private peering of the ExpressR
The following table shows the route table of the private peering of the ExpressRoute of Fabrikam Inc., after configuring Global Reach. See that the route table has routes belonging to both the on-premises networks.
-![Fabrikam ExpressRoute route table after Global Reach]( ./media/cross-network-connectivity/fabrikamexr-rt-gr.png )
+![Fabrikam ExpressRoute route table after Global Reach](./media/cross-network-connectivity/fabrikamexr-rt-gr.png)
## Next steps
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[EdgeConnex](https://www.edgeconnex.com/services/edge-data-centers-proximity-matters/)** | Megaport, PacketFabric | | **[Flexential](https://www.flexential.com/connectivity/cloud-connect-microsoft-azure-expressroute)** | IX Reach, Megaport, PacketFabric | | **[QTS Data Centers](https://www.qtsdatacenters.com/hybrid-solutions/connectivity/azure-cloud )** | Megaport, PacketFabric |
-| **[Stream Data Centers]( https://www.streamdatacenters.com/products-services/network-cloud/ )** | Megaport |
+| **[Stream Data Centers](https://www.streamdatacenters.com/products-services/network-cloud/)** | Megaport |
| **[RagingWire Data Centers](https://www.ragingwire.com/wholesale/wholesale-data-centers-worldwide-nexcenters)** | IX Reach, Megaport, PacketFabric | | **[T5 Datacenters](https://t5datacenters.com/)** | IX Reach | | **vXchnge** | IX Reach, Megaport |
governance Policy As Code Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/policy-as-code-github.md
resources, the quickstart articles explain how to do so.
### Export Azure Policy objects from the Azure portal
+ > [!NOTE]
+ > Owner permissions are required at the scope of the policy objects being exported to GitHub.
+ To export a policy definition from Azure portal, follow these steps: 1. Launch the Azure Policy service in the Azure portal by clicking **All services**, then searching
hdinsight Hdinsight Go Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-go-sdk-overview.md
ms.devlang: golang Previously updated : 01/03/2020 Last updated : 06/23/2022 # HDInsight SDK for Go (Preview)
hdinsight Hdinsight Hadoop Create Linux Clusters Arm Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md
description: Learn how to create clusters for HDInsight by using Resource Manage
Previously updated : 04/07/2020 Last updated : 06/23/2022 # Create Apache Hadoop clusters in HDInsight by using Resource Manager templates
In this article, you have learned several ways to create an HDInsight cluster. T
* For an in-depth example of deploying an application, see [Provision and deploy microservices predictably in Azure](../app-service/deploy-complex-application-predictably.md). * For guidance on deploying your solution to different environments, see [Development and test environments in Microsoft Azure](../devtest-labs/devtest-lab-overview.md). * To learn about the sections of the Azure Resource Manager template, see [Authoring templates](../azure-resource-manager/templates/syntax.md).
-* For a list of the functions you can use in an Azure Resource Manager template, see [Template functions](../azure-resource-manager/templates/template-functions.md).
+* For a list of the functions you can use in an Azure Resource Manager template, see [Template functions](../azure-resource-manager/templates/template-functions.md).
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
Fixed issues represent selected issues that were previously logged via Hortonwor
- **OMS Portal:** We have removed the link from HDInsight resource page that was pointing to OMS portal. Azure Monitor logs initially used its own portal called the OMS portal to manage its configuration and analyze collected data. All functionality from this portal has been moved to the Azure portal where it will continue to be developed. HDInsight has deprecated the support for OMS portal. Customers will use HDInsight Azure Monitor logs integration in Azure portal. -- **Spark 2.3**-
- - <https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations>
+- **Spark 2.3:** [Spark Release 2.3.0 deprecations](https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations)
### Upgrading
hdinsight Hdinsight Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-troubleshoot-guide.md
Title: Azure HDInsight troubleshooting guides
description: Troubleshoot Azure HDInsight. Step-by-step documentation shows you how to use HDInsight to solve common problems with Apache Hive, Apache Spark, Apache YARN, Apache HBase, HDFS, and Apache Storm. Previously updated : 08/14/2019 Last updated : 06/23/2022 # Troubleshoot Azure HDInsight
hdinsight Apache Spark Intellij Tool Failure Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
Previously updated : 07/12/2019 Last updated : 06/23/2022 # Failure spark job debugging with Azure Toolkit for IntelliJ (preview)
Create a Spark Scala/Java application, then run the application on a Spark cl
### Manage resources * [Manage resources for the Apache Spark cluster in Azure HDInsight](apache-spark-resource-manager.md)
-* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
+* [Track and debug jobs running on an Apache Spark cluster in HDInsight](apache-spark-job-debugging.md)
hdinsight Apache Spark Job Debugging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-job-debugging.md
description: Use YARN UI, Spark UI, and Spark History server to track and debug
Previously updated : 04/23/2020 Last updated : 06/23/2022 # Debug Apache Spark jobs running on Azure HDInsight
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/centers-for-medicare-tutorial-introduction.md
The Patient Access API describes adherence to four FHIR implementation guides:
The Provider Directory API describes adherence to one implementation guide:
-* [HL7 Da Vinci PDex Plan Network IG](http://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
+* [HL7 Da Vinci PDex Plan Network IG](https://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
## Touchstone
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
# Store profiles in Azure API for FHIR
-HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications.
-[FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
+HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications. [FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
Azure API for FHIR allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html).
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
-Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. You can go to the specific IG site to learn more about the IG and the profiles defined within it.
+Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. Visit the specific IG site to learn more about the IG and the profiles defined within it:
-|Name |URL
-|- |-
-Us Core |<https://www.hl7.org/fhir/us/core/>
-CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
-Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/>
-Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+- [US Core](https://www.hl7.org/fhir/us/core/)
+- [CARIN Blue Button](https://hl7.org/fhir/us/carin-bb)
+- [Da Vinci Payer Data Exchange](https://hl7.org/fhir/us/davinci-pdex)
+- [Argonaut](https://www.fhir.org/guides/argonaut/pd/)
> [!NOTE] > The Azure API for FHIR does not store any profiles from implementation guides by default. You will need to load them into the Azure API for FHIR.
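As a rough illustration of loading a profile, the following Python sketch POSTs a StructureDefinition to the service's FHIR endpoint. The service URL is a placeholder, `profile.json` is assumed to hold a StructureDefinition downloaded from the implementation guide you want to use, and the token scope shown follows the usual Azure AD pattern for the Azure API for FHIR.

```python
# Minimal sketch: load a profile (a StructureDefinition resource) into the
# Azure API for FHIR with a plain REST call. The FHIR endpoint is a placeholder,
# and profile.json is assumed to contain a StructureDefinition from your IG.
import json
import requests
from azure.identity import DefaultAzureCredential

fhir_url = "https://<your-fhir-service>.azurehealthcareapis.com"

credential = DefaultAzureCredential()
token = credential.get_token(f"{fhir_url}/.default")  # assumed scope pattern
headers = {
    "Authorization": f"Bearer {token.token}",
    "Content-Type": "application/fhir+json",
}

with open("profile.json") as f:
    structure_definition = json.load(f)

# POST creates the StructureDefinition; the service assigns its resource ID.
response = requests.post(
    f"{fhir_url}/StructureDefinition", headers=headers, json=structure_definition
)
response.raise_for_status()
print(response.json()["id"])
```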
healthcare-apis Centers For Medicare Tutorial Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/centers-for-medicare-tutorial-introduction.md
The Patient Access API describes adherence to four FHIR implementation guides:
The Provider Directory API describes adherence to one implementation guide:
-* [HL7 Da Vinci PDex Plan Network IG](http://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
+* [HL7 Da Vinci PDex Plan Network IG](https://build.fhir.org/ig/HL7/davinci-pdex-plan-net/): This implementation guide defines a FHIR interface to a health insurer's insurance plans, their associated networks, and the organizations and providers that participate in these networks.
## Touchstone
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
We have a basic SMART on FHIR proxy as part of the managed service. If this does
### Can I create a custom FHIR resource?
-We don't allow custom FHIR resources. If you need a custom FHIR resource, you can build a custom resource on top of the [Basic resource](http://www.hl7.org/fhir/basic.html) with extensions.
+We don't allow custom FHIR resources. If you need a custom FHIR resource, you can build a custom resource on top of the [Basic resource](https://www.hl7.org/fhir/basic.html) with extensions.
### Are [extensions](https://www.hl7.org/fhir/extensibility.html) supported on the FHIR service?
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/store-profiles-in-fhir.md
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
-Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. You can go to the specific IG site to learn more about the IG and the profiles defined within it.
-
-|Name |URL
-|- |-
-Us Core |<https://www.hl7.org/fhir/us/core/>
-CARIN Blue Button |<http://hl7.org/fhir/us/carin-bb/>
-Da Vinci Payer Data Exchange |<http://hl7.org/fhir/us/davinci-pdex/>
-Argonaut |<http://www.fhir.org/guides/argonaut/pd/>
+Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. Visit the specific IG site to learn more about the IG and the profiles defined within it:
+
+- [US Core](https://www.hl7.org/fhir/us/core/)
+- [CARIN Blue Button](https://hl7.org/fhir/us/carin-bb)
+- [Da Vinci Payer Data Exchange](https://hl7.org/fhir/us/davinci-pdex)
+- [Argonaut](https://www.fhir.org/guides/argonaut/pd/)
> [!NOTE] > The FHIR service does not store any profiles from implementation guides by default. You will need to load them into the FHIR service.
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Azure Health Data Services now includes support for DICOM service. DICOM enables
For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR.
-* **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
+* **Support for transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](https://www.hl7.org/) and refer to batch/transaction interactions.
* [Chained Search Improvements](./././fhir/overview-of-search.md#chained--reverse-chained-searching): Chained Search & Reverse Chained Search are no longer limited to 100 items per subquery. * The $convert-data operation can now transform JSON objects to FHIR R4. * Events: Trigger new workflows when resources are created, updated, or deleted in a FHIR service.
import-export Storage Import Export Data From Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md
Perform the following steps to order an import job in Azure Import/Export via th
1. Select the **Destination country/region** for the job. 1. Then select **Apply**.
- [ ![Screenshot of Get Started options for a new export order in Azure Import/Export's Preview portal. The Export From Azure transfer type and the Apply button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png) ](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox)
+ [![Screenshot of Get Started options for a new export order in Azure Import/Export's Preview portal. The Export From Azure transfer type and the Apply button are highlighted.](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-03-export-job.png#lightbox)
1. Choose the **Select** button for **Import/Export Job**.
Perform the following steps to order an import job in Azure Import/Export via th
You can select **Go to resource** to open the **Overview** of the job.
- [ ![Screenshot showing the Overview pane for an Azure Import Export job in Created state in the Preview portal.](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png) ](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox)
+ [![Screenshot showing the Overview pane for an Azure Import Export job in Created state in the Preview portal.](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png)](./media/storage-import-export-data-from-blobs/import-export-order-preview-12-export-job.png#lightbox)
# [Portal (Classic)](#tab/azure-portal-classic) Perform the following steps to create an export job in the Azure portal using the classic Azure Import/Export service.
-1. Log on to <https://portal.azure.com/>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Search for **import/export jobs**. ![Screenshot of the Search box at the top of the Azure Portal home page. A search key for the Import Export Jobs Service is entered in the Search box.](../../includes/media/storage-import-export-classic-import-steps/import-to-blob-1.png)
You can use the copy logs from the job to verify that all data transferred succe
To find the log locations, open the job in the [Azure portal](https://portal.azure.com/). The **Data copy details** show the **Copy log path** and **Verbose log path** for each drive that was included in the order.
-[ ![Screenshot showing a completed export job in Azure Import Export. In Data Copy Details, the Copy Log Path and Verbose Log Path are highlighted.](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png) ](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png#lightbox)
+[![Screenshot showing a completed export job in Azure Import Export. In Data Copy Details, the Copy Log Path and Verbose Log Path are highlighted.](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png)](./media/storage-import-export-data-from-blobs/import-export-status-export-order-completed.png#lightbox)
At this time, you can delete the job or leave it. Jobs automatically get deleted after 90 days.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Title: Connect devices with X.509 certificates in an Azure IoT Central applicati
description: How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application Previously updated : 06/30/2021 Last updated : 06/15/2022
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-administer.md
Title: Change Azure IoT Central application settings | Microsoft Docs
description: Learn how to manage your Azure IoT Central application by changing application name, URL, upload image, and delete an application Previously updated : 12/28/2021 Last updated : 06/22/2022
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Title: Authorize REST API in Azure IoT Central
description: How to authenticate and authorize IoT Central REST API calls Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Build Iotc Device Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-build-iotc-device-bridge.md
Previously updated : 12/21/2021 Last updated : 06/22/2022 custom: contperf-fy22q3
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md
description: How to configure file uploads from your devices to the cloud. After
Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Configure Rules Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules-advanced.md
Title: Use workflows to integrate your Azure IoT Central application with other
description: This how-to article shows you, as a builder, how to configure rules and actions that integrate your IoT Central application with other cloud services. To create an advanced rule, you use an IoT Central connector in either Power Automate or Azure Logic Apps. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-rules.md
Title: Configure rules and actions in Azure IoT Central | Microsoft Docs
description: This how-to article shows you, as a builder, how to configure telemetry-based rules and actions in your Azure IoT Central application. Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
Title: Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central | Microsoft
description: Learn how to connect Azure IoT Edge for Linux on Windows (EFLOW) with IoT Central Previously updated : 11/09/2021 Last updated : 06/16/2022
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
-- Previously updated : 08/18/2021++ Last updated : 06/15/2022 # This article applies to solution builders.
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-control-devices-with-rest-api.md
Title: Use the REST API to manage devices in Azure IoT Central
description: How to use the IoT Central REST API to control devices in an application Previously updated : 12/28/2021 Last updated : 06/20/2022
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Title: Analyze device data in your Azure IoT Central application | Microsoft Doc
description: Analyze device data in your Azure IoT Central application. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Create And Manage Applications Csp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-and-manage-applications-csp.md
Previously updated : 08/28/2021 Last updated : 06/15/2022
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-analytics.md
Title: Extend Azure IoT Central with custom analytics | Microsoft Docs
description: As a solution developer, configure an IoT Central application to do custom analytics and visualizations. This solution uses Azure Databricks. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md
Title: Extend Azure IoT Central with custom rules and notifications | Microsoft
description: As a solution developer, configure an IoT Central application to send email notifications when a device stops sending telemetry. This solution uses Azure Stream Analytics, Azure Functions, and SendGrid. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-central Howto Create Iot Central Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-iot-central-application.md
Previously updated : 12/22/2021 Last updated : 06/20/2022
iot-central Howto Create Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-organizations.md
Previously updated : 12/27/2021 Last updated : 06/21/2022
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md
Title: Edit a device template in your Azure IoT Central application | Microsoft
description: Iterate over your device templates without impacting your live connected devices Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-export-data-legacy.md
description: How to export data from your Azure IoT Central application to Azure
Previously updated : 01/06/2022 Last updated : 06/20/2022
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards.md
Title: Create and manage Azure IoT Central dashboards | Microsoft Docs
description: Learn how to create and manage application and personal dashboards in Azure IoT Central. Previously updated : 12/28/2021 Last updated : 06/20/2022
iot-central Howto Manage Data Export With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-data-export-with-rest-api.md
Title: Use the REST API to manage data export in Azure IoT Central
description: How to use the IoT Central REST API to manage data export in an application Previously updated : 10/18/2021 Last updated : 06/15/2022
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
Title: Use the REST API to add device templates in Azure IoT Central
description: How to use the IoT Central REST API to add device templates in an application Previously updated : 12/17/2021 Last updated : 06/17/2022
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
Title: How to use the IoT Central REST API to manage devices
description: How to use the IoT Central REST API to add devices in an application Previously updated : 12/18/2021 Last updated : 06/22/2022
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
Previously updated : 12/27/2021 Last updated : 06/20/2022
iot-central Howto Manage Iot Central From Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-portal.md
Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Manage Iot Central With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-with-rest-api.md
Previously updated : 10/25/2021 Last updated : 06/15/2022
iot-central Howto Manage Jobs With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-jobs-with-rest-api.md
Title: Use the REST API to manage jobs in Azure IoT Central
description: How to use the IoT Central REST API to create and manage jobs in an application Previously updated : 01/05/2022 Last updated : 06/20/2022
iot-central Howto Manage Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-preferences.md
Title: Manage your personal preferences on IoT Central | Microsoft Docs
description: How to manage your personal application preferences such as changing language, theme, and default organization in your IoT Central application. Previously updated : 01/04/2022 Last updated : 06/22/2022
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
Title: Use the REST API to manage users and roles in Azure IoT Central
description: How to use the IoT Central REST API to manage users and roles in an application Previously updated : 08/30/2021 Last updated : 06/16/2022
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
Title: Manage users and roles in Azure IoT Central application | Microsoft Docs
description: As an administrator, how to manage users and roles in your Azure IoT Central application Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Map Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-map-data.md
Title: Transform telemetry on ingress to IoT Central | Microsoft Docs
description: To use complex telemetry from devices, you can use mappings to transform it as it arrives in your IoT Central application. This article describes how to map device telemetry on ingress to IoT Central. Previously updated : 11/22/2021 Last updated : 06/17/2022
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
Title: Monitor device connectivity using the Azure IoT Central Explorer
description: Monitor device messages and observe device twin changes through the IoT Central Explorer CLI. Previously updated : 08/30/2021 Last updated : 06/16/2022 ms.tool: azure-cli
iot-central Howto Query With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-query-with-rest-api.md
Title: Use the REST API to query devices in Azure IoT Central
description: How to use the IoT Central REST API to query devices in an application Previously updated : 10/12/2021 Last updated : 06/14/2022
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
Title: Define a new IoT device type in Azure IoT Central | Microsoft Docs
description: This article shows you how to create a new Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties, and commands for your type. Previously updated : 12/22/2021 Last updated : 06/22/2022
iot-central Howto Transform Data Internally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data-internally.md
Title: Transform data inside Azure IoT Central | Microsoft Docs
description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data inside your IoT Central application before exporting it. Previously updated : 10/28/2021 Last updated : 06/15/2022
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
Title: Transform data for Azure IoT Central | Microsoft Docs
description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data both on the way into IoT Central and on the way out. The scenarios described use IoT Edge and Azure Functions. Previously updated : 12/27/2021 Last updated : 06/24/2022
To set up the data export to send data to your Device bridge:
### Verify
-The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and NPM installed on your local machine. If you don't want to install these prerequisites, use the[Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
+The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and NPM installed on your local machine. If you don't want to install these prerequisites, use the [Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
To run a sample device that tests the scenario:
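As a rough sketch only (the file name `sample-device.js` and the connection-string variable are illustrative, not the article's actual values), running such a Node.js sample typically looks like this:

```bash
# Install the sample's dependencies (assumes a package.json in the sample folder)
npm install

# Point the sample at your device; the variable name here is an assumption
export DEVICE_CONNECTION_STRING="HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

# Start the simulated device
node sample-device.js
```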
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
Title: How to use device commands in an Azure IoT Central solution
description: How to use device commands in an Azure IoT Central solution. This tutorial shows you how to use device commands from a client app connected to your Azure IoT Central application. Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
Title: Use location data in an Azure IoT Central solution
description: Learn how to use location data sent from a device connected to your IoT Central application. Plot location data on a map or create geofencing rules. Previously updated : 12/27/2021 Last updated : 06/22/2022
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Title: Use properties in an Azure IoT Central solution
description: Learn how to use read-only and writable properties in an Azure IoT Central solution. Previously updated : 12/21/2021 Last updated : 06/21/2022
iot-edge How To Deploy At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-at-scale.md
The **Priority** and **Time to live** parameters are optional parameters that yo
For more information about how to create routes, see [Declare routes](module-composition.md#declare-routes).
+Select **Next: Target Devices**.
+
+### Step 4: Target devices
+
+Use the tags property from your devices to target the specific devices that should receive this deployment.
+
+Since multiple deployments may target the same device, you should give each deployment a priority number. If there's ever a conflict, the deployment with the highest priority (larger values indicate higher priority) wins. If two deployments have the same priority number, the one that was created most recently wins.
+
+If multiple deployments target the same device, only the one with the highest priority is applied. If multiple layered deployments target the same device, they are all applied. However, if any properties are duplicated, such as two routes with the same name, the value from the higher-priority layered deployment overwrites the rest.
+
+Any layered deployment targeting a device must have a higher priority than the base deployment in order to be applied.
+
+1. Enter a positive integer for the deployment **Priority**.
+1. Enter a **Target condition** to determine which devices will be targeted with this deployment. The condition is based on device twin tags or device twin reported properties and should match the expression format. For example, `tags.environment='test'` or `properties.reported.devicemodel='4000x'`.
+ Select **Next: Metrics**.
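The same deployment can also be created with the Azure CLI. The following is a sketch only; the hub name, deployment ID, and manifest path are placeholders:

```azurecli
az iot edge deployment create \
  --deployment-id "test-deployment" \
  --hub-name "my-iot-hub" \
  --content ./deployment-manifest.json \
  --target-condition "tags.environment='test'" \
  --priority 10
```

Add the `--layered` flag to create a layered deployment instead of a base deployment.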
-### Step 4: Metrics
+### Step 5: Metrics
Metrics provide summary counts of the various states that a device may report back as a result of applying configuration content.
Metrics provide summary counts of the various states that a device may report ba
WHERE properties.reported.lastDesiredStatus.code = 200 ```
-Select **Next: Target Devices**.
-
-### Step 5: Target devices
-
-Use the tags property from your devices to target the specific devices that should receive this deployment.
-
-Since multiple deployments may target the same device, you should give each deployment a priority number. If there's ever a conflict, the deployment with the highest priority (larger values indicate higher priority) wins. If two deployments have the same priority number, the one that was created most recently wins.
-
-If multiple deployments target the same device, then only the one with the higher priority is applied. If multiple layered deployments target the same device then they are all applied. However, if any properties are duplicated, like if there are two routes with the same name, then the one from the higher priority layered deployment overwrites the rest.
-
-Any layered deployment targeting a device must have a higher priority than the base deployment in order to be applied.
-
-1. Enter a positive integer for the deployment **Priority**.
-1. Enter a **Target condition** to determine which devices will be targeted with this deployment. The condition is based on device twin tags or device twin reported properties and should match the expression format. For example, `tags.environment='test'` or `properties.reported.devicemodel='4000x'`.
- Select **Next: Review + Create** to move on to the final step. ### Step 6: Review and create
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Azure IoT Edge for Linux on Windows supports the following architectures:
For more information about Windows ARM64 supported processors, see [Windows Processor Requirements](/windows-hardware/design/minimum/windows-processor-requirements).
-## Virtual machines
+## Nested virtualization
-Azure IoT Edge for Linux on Windows can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. There are two forms of nested virtualization compatible with Azure IoT Edge for Linux on Windows. Users can choose to deploy through a local VM or Azure VM. For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
+Azure IoT Edge for Linux on Windows (EFLOW) can run in Windows virtual machines. Using a virtual machine as an IoT Edge device is common when customers want to augment existing infrastructure with edge intelligence. In order to run the EFLOW virtual machine inside a Windows VM, the host VM must support nested virtualization. EFLOW supports the following nested virtualization scenarios:
+
+| Version | Hyper-V VM | Azure VM | VMware ESXi VM | Other Hypervisor |
+| - | -- | -- | -- | -- |
+| EFLOW 1.1 LTS | ![1.1LTS](./media/support/green-check.png) | ![1.1LTS](./media/support/green-check.png) | ![1.1LTS](./media/support/green-check.png) | - |
+| EFLOW Continuous Release (CR) ([Public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)) | ![CR](./media/support/green-check.png) | ![CR](./media/support/green-check.png) | ![CR](./media/support/green-check.png) | - |
+
+For more information, see [EFLOW Nested virtualization](./nested-virtualization.md).
### VMware virtual machine Azure IoT Edge for Linux on Windows supports running inside a Windows virtual machine hosted on the [VMware ESXi](https://www.vmware.com/products/esxi-and-esx.html) product family. Specific networking and virtualization configurations are needed to support this scenario. For more information about VMware configuration, see [EFLOW Nested virtualization](./nested-virtualization.md).
iot-hub Iot Hub Automatic Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management-cli.md
Automatic device management works by updating a set of device twins or module tw
* The **metrics** define the summary counts of various configuration states such as **Success**, **In Progress**, and **Error**. Custom metrics are specified as queries on twin reported properties. System metrics are the default metrics that measure twin update status, such as the number of twins that are targeted and the number of twins that have been successfully updated.
-Automatic configurations run for the first time shortly after the configuration is created and then at five minute intervals. Metrics queries run each time the automatic configuration runs.
+Automatic configurations run for the first time shortly after the configuration is created and then at five-minute intervals. Metrics queries run each time the automatic configuration runs. A maximum of 100 automatic configurations is supported on standard tier IoT hubs; ten on free tier IoT hubs. Throttling limits also apply. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
## CLI prerequisites
Metric queries for modules are also similar to queries for devices, but you sele
## Create a configuration
-You configure target devices by creating a configuration that consists of the target content and metrics.
+You can create a maximum of 100 automatic configurations on standard tier IoT hubs; ten on free tier IoT hubs. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
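As a quick check against that limit, a sketch like the following counts the configurations that already exist (the hub name is a placeholder):

```azurecli
az iot hub configuration list --hub-name "my-iot-hub" --query "length(@)" --output tsv
```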
-Use the following command to create a configuration:
+You configure target devices by creating a configuration that consists of the target content and metrics. Use the following command to create a configuration:
```azurecli az iot hub configuration create --config-id [configuration id] \
iot-hub Iot Hub Automatic Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-automatic-device-management.md
Automatic device management works by updating a set of device twins or module tw
* The **metrics** define the summary counts of various configuration states such as **Success**, **In Progress**, and **Error**. Custom metrics are specified as queries on twin reported properties. System metrics are the default metrics that measure twin update status, such as the number of twins that are targeted and the number of twins that have been successfully updated.
-Automatic configurations run for the first time shortly after the configuration is created and then at five minute intervals. Metrics queries run each time the automatic configuration runs.
+Automatic configurations run for the first time shortly after the configuration is created and then at five-minute intervals. Metrics queries run each time the automatic configuration runs. A maximum of 100 automatic configurations is supported on standard tier IoT hubs; ten on free tier IoT hubs. Throttling limits also apply. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
## Implement twins
Before you create a configuration, you must specify which devices or modules you
## Create a configuration
+You can create a maximum of 100 automatic configurations on standard tier IoT hubs; ten on free tier IoT hubs. To learn more, see [Quotas and Throttling](iot-hub-devguide-quotas-throttling.md).
+ 1. In the [Azure portal](https://portal.azure.com), go to your IoT hub. 2. Select **Configurations** in the left navigation pane.
iot-hub Tutorial X509 Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-x509-scripts.md
Microsoft provides PowerShell and Bash scripts to help you understand how to cre
### Step 1 - Setup
-Get OpenSSL for Windows. See <https://www.openssl.org/docs/faq.html#MISC4> for places to download it or <https://www.openssl.org/source/> to build from source. Then run the preliminary scripts:
+Download [OpenSSL for Windows](https://www.openssl.org/docs/faq.html#MISC4) or [build it from source](https://www.openssl.org/source/). Then run the preliminary scripts:
1. Copy the scripts from this GitHub [repository](https://github.com/Azure/azure-iot-sdk-c/tree/master/tools/CACertificates) into the local directory in which you want to work. All files will be created as children of this directory.
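A minimal shell sketch of that setup, with an illustrative working directory name:

```bash
# Confirm OpenSSL is available on the PATH
openssl version

# Copy the certificate helper scripts into a local working directory
git clone https://github.com/Azure/azure-iot-sdk-c.git
mkdir ca-certs
cp azure-iot-sdk-c/tools/CACertificates/* ca-certs/
cd ca-certs
```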
lab-services Create And Configure Labs Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/create-and-configure-labs-admin.md
+
+ Title: Configure regions for labs
+description: Learn how to change the region of a lab.
++++ Last updated : 06/17/2022+++
+# Configure regions for labs
+
+This article shows you how to configure the locations where you can create labs by enabling or disabling regions associated with the lab plan. Enabling a region allows lab creators to create labs within that region. You cannot create labs in disabled regions.
+
+When you create a lab plan, you have to set an initial region for the labs, but you can enable or disable more regions for your lab at any time. If you create a lab plan by using the Azure portal, the enabled regions initially include the same region as the location of the lab plan. If you create a lab plan by using the API or SDKs, you must set the AllowedRegion property when you create the lab plan.
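For example, a command-line sketch might look like the following. This assumes the `labservices` Azure CLI extension and an `--allowed-regions` parameter; verify the parameter name against your installed extension before relying on it.

```azurecli
# Sketch: create a lab plan with an explicit set of allowed (enabled) regions
az labservices lab-plan create \
  --name "my-lab-plan" \
  --resource-group "my-resource-group" \
  --location "westus2" \
  --allowed-regions "westus2" "eastus"
```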
+
+You might need to change the region for your labs in these circumstances:
+- Regulatory compliance. Choose where your data resides; for example, organizations may select specific regions to help ensure compliance with local regulations.
+- Service availability. Provide the optimal lab experience for your students by ensuring that Azure Lab Services is available in the region closest to them. For more information about service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services).
+- New region. You may acquire quota in a region different from the regions already enabled.
+
+## Prerequisites
+
+- To perform these steps, you must have an existing lab plan.
+
+## Enable regions
+
+To enable one or more regions after lab creation, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your lab plan.
+1. On the Lab plan overview page, select **Lab settings** from the left menu or select **Adjust settings**. Both go to the Lab settings page.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-plan-overview-page.png" alt-text="Screenshot that shows the Lab overview page with Lab settings and Adjust settings highlighted.":::
+1. On the Lab settings page, under **Location selection**, select **Select regions**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-settings-page.png" alt-text="Screenshot that shows the Lab settings page with Select regions highlighted.":::
+1. On the Enabled regions page, select the region(s) you want to add, and then select **Enable**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/enabled-regions-page.png" alt-text="Screenshot that shows the Enabled regions page with Enable and Apply highlighted.":::
+1. Enabled regions are displayed at the top of the list. Check that your desired regions are displayed and then select **Apply** to confirm your selection.
+ > [!NOTE]
+ > There are two steps to saving your enabled regions:
+ > - At the top of the regions list select **Enable**
+ > - At the bottom of the page, select **Apply**
+1. On the Lab settings page, verify that the regions you enabled are listed and then select **Save**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/newly-enabled-regions.png" alt-text="Screenshot that shows the Lab settings page with the newly selected regions highlighted.":::
+ > [!NOTE]
+ > You must select **Save** to save your lab plan configuration.
+
+## Disable regions
+
+To disable one or more regions after lab creation, follow these steps:
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your lab plan.
+1. On the Lab plan overview page, select **Lab settings** from the left menu or select **Adjust settings**. Both go to the Lab settings page.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-plan-overview-page.png" alt-text="Screenshot that shows the Lab overview page with Lab settings and Adjust settings highlighted.":::
+1. On the Lab settings page, under **Location selection**, select **Select regions**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/lab-settings-page.png" alt-text="Screenshot that shows the Lab settings page with Select regions highlighted.":::
+1. On the Enabled regions page, clear the check box for the region(s) you want to disable, and then select **Disable**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/disable-regions-page.png" alt-text="Screenshot that shows the Enabled regions page with Disable and Apply highlighted.":::
+1. Enabled regions are displayed at the top of the list. Check that your desired regions are displayed and then select **Apply** to confirm your selection.
+ > [!NOTE]
+ > There are two steps to saving your disabled regions:
+ > - At the top of the regions list select **Disable**
+ > - At the bottom of the page, select **Apply**
+1. On the Lab settings page, verify that the regions you disabled are no longer listed, and then select **Save**.
+ :::image type="content" source="./media/create-and-configure-labs-educator/newly-enabled-regions.png" alt-text="Screenshot that shows the Lab settings page with the newly selected regions highlighted.":::
+ > [!NOTE]
+ > You must select **Save** to save your lab plan configuration.
+
+## Next steps
+
+- Learn how to choose the right regions for your Lab plan at [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/#overview).
+- Check [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=lab-services) for Azure Lab Services availability near you.
lab-services How To Configure Student Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-student-usage.md
When students use the registration link to sign in to a classroom, they're promp
Here's a link for students to [sign up for a Microsoft account](http://signup.live.com). > [!IMPORTANT]
-> When students sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, <http://signup.live.com>, in the lab registration email that you send to students who are using non-Microsoft accounts.
+> When students sign in to a lab, they aren't given the option to create a Microsoft account. For this reason, we recommend that you include this sign-up link, `http://signup.live.com`, in the lab registration email that you send to students who are using non-Microsoft accounts.
### Use a GitHub account
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
For an internal load balancer frontend, add a **zones** parameter to the interna
## Non-Zonal
-Load Balancers can also be created in a non-zonal configuration by use of a "no-zone" frontend (Public IP or Public IP Prefix). This option does not give a guarantee of redundancy. Note that all Public IP addresses that are [upgraded](../virtual-network/ip-services/public-ip-upgrade-portal.md) will be of type "no-zone".
+Load balancers can also be created in a non-zonal configuration by using a "no-zone" frontend (a public IP or public IP prefix for a public load balancer; a private IP for an internal load balancer). This option doesn't guarantee redundancy. Note that all public IP addresses that are [upgraded](../virtual-network/ip-services/public-ip-upgrade-portal.md) will be of type "no-zone".
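To make the distinction concrete, here's an Azure CLI sketch with placeholder names. Omitting `--zone` typically produces a no-zone frontend IP, while passing all three zones makes it zone-redundant; defaults can vary by CLI and API version, so set `--zone` explicitly when zonality matters.

```azurecli
# Non-zonal ("no-zone") Standard public IP for a load balancer frontend
az network public-ip create \
  --resource-group "my-rg" \
  --name "frontend-ip-nozone" \
  --sku Standard

# Zone-redundant Standard public IP (zones 1, 2, and 3)
az network public-ip create \
  --resource-group "my-rg" \
  --name "frontend-ip-zr" \
  --sku Standard \
  --zone 1 2 3
```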
## <a name="design"></a> Design considerations
Using multiple frontends allow you to load balance traffic on more than one port
### Transition between regional zonal models
-In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing Public IPs (e.g., used for Load Balancer frontends) would remain non-zonal. In order to ensure your architecture can take advantage of the new zones, it is recommended that new frontend IPs be created, and the appropriate rules and configurations be replicated to utilize these new public IPs.
+When a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs (for example, those used for load balancer frontends) remain non-zonal. To ensure your architecture can take advantage of the new zones, create new frontend IPs and replicate the appropriate rules and configurations to use them.
### Control vs data plane implications
machine-learning Concept Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md
These settings can be applied to the best model as a result of your automated ML
|**Enable/disable ONNX model compatibility**|✓|| |**Test the model** | ✓| ✓ (preview)|
-### Run control settings
+### Job control settings
-These settings allow you to review and control your experiment runs and its child runs.
+These settings allow you to review and control your experiment jobs and their child jobs.
| |The Python SDK|The studio web experience| |-|:-:|:-:|
-|**Run summary table**| ✓|✓|
-|**Cancel runs & child runs**| ✓|✓|
+|**Job summary table**| ✓|✓|
+|**Cancel jobs & child jobs**| ✓|✓|
|**Get guardrails**| ✓|✓| ## When to use AutoML: classification, regression, forecasting, computer vision & NLP
With this capability you can:
* Download or deploy the resulting model as a web service in Azure Machine Learning. * Operationalize at scale, leveraging Azure Machine Learning [MLOps](concept-model-management-and-deployment.md) and [ML Pipelines](concept-ml-pipelines.md) capabilities.
-Authoring AutoML models for vision tasks is supported via the Azure ML Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+Authoring AutoML models for vision tasks is supported via the Azure ML Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
Learn how to [set up AutoML training for computer vision models](how-to-auto-train-image-models.md).
Instance segmentation | Tasks to identify objects in an image at the pixel level
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
+Support for natural language processing (NLP) tasks in automated ML allows you to easily generate models trained on text data for text classification and named entity recognition scenarios. Authoring automated ML trained NLP models is supported via the Azure Machine Learning Python SDK. The resulting experimentation jobs, models, and outputs can be accessed from the Azure Machine Learning studio UI.
The NLP capability supports:
Using **Azure Machine Learning**, you can design and run your automated ML train
1. **Configure the compute target for model training**, such as your [local computer, Azure Machine Learning Computes, remote VMs, or Azure Databricks](how-to-set-up-training-targets.md). 1. **Configure the automated machine learning parameters** that determine how many iterations over different models, hyperparameter settings, advanced preprocessing/featurization, and what metrics to look at when determining the best model.
-1. **Submit the training run.**
+1. **Submit the training job.**
1. **Review the results**
The following diagram illustrates this process.
![Automated Machine learning](./media/concept-automated-ml/automl-concept-diagram2.png)
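If you work from the CLI (v2), submitting the configured training job can be sketched as follows; the YAML file name is a placeholder for your own AutoML job definition:

```azurecli
az ml job create \
  --file automl-job.yml \
  --resource-group "my-rg" \
  --workspace-name "my-workspace"
```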
-You can also inspect the logged run information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the run. The training run produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
+You can also inspect the logged job information, which [contains metrics](how-to-understand-automated-ml.md) gathered during the job. The training job produces a Python serialized object (`.pkl` file) that contains the model and data preprocessing.
While model building is automated, you can also [learn how important or relevant features are](./v1/how-to-configure-auto-train-v1.md#explain) to the generated models.
The web interface for automated ML always uses a remote [compute target](concept
### Choose compute target Consider these factors when choosing your compute target:
- * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available.
- * **Choose a remote ML compute cluster**: If you are training with larger datasets like in production training creating models which need longer trains, remote compute will provide much better end-to-end time performance because `AutoML` will parallelize trains across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure will add around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running.
 + * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short training times (that is, seconds or a couple of minutes per child job), training on your local computer might be a better choice. There's no setup time; the infrastructure resources (your PC or VM) are directly available.
 + * **Choose a remote ML compute cluster**: If you're training with larger datasets, as in production training that creates models requiring longer training times, a remote compute cluster provides much better end-to-end performance because `AutoML` parallelizes training across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure adds around 1.5 minutes per child job, plus additional minutes for the cluster infrastructure if the VMs aren't yet up and running.
### Pros and cons
Consider these pros and cons when choosing to use local vs. remote.
| | Pros (Advantages) |Cons (Handicaps) | |||||
-|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize runs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
-|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child runs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child run |
+|**Local compute target** | <li> No environment start-up time | <li> Subset of features<li> Can't parallelize jobs <li> Worse for large data. <li>No data streaming while training <li> No DNN-based featurization <li> Python SDK only |
+|**Remote ML compute clusters**| <li> Full set of features <li> Parallelize child jobs <li> Large data support<li> DNN-based featurization <li> Dynamic scalability of compute cluster on demand <li> No-code experience (web UI) also available | <li> Start-up time for cluster nodes <li> Start-up time for each child job |
### Feature availability
More features are available when you use the remote compute, as shown in the tab
| Out-of-the-box GPU support (training and inference) | ✓ | | | Image Classification and Labeling support | ✓ | | | Auto-ARIMA, Prophet and ForecastTCN models for forecasting | ✓ | |
-| Multiple runs/iterations in parallel | ✓ | |
+| Multiple jobs/iterations in parallel | ✓ | |
| Create models with interpretability in AutoML studio web experience UI | ✓ | | | Feature engineering customization in studio web experience UI| ✓ | | | Azure ML hyperparameter tuning | ✓ | | | Azure ML Pipeline workflow support | ✓ | |
-| Continue a run | ✓ | |
+| Continue a job | ✓ | |
| Forecasting | ✓ | ✓ | | Create and run experiments in notebooks | ✓ | ✓ | | Register and visualize experiment's info and metrics in UI | ✓ | ✓ |
To help confirm that such bias isn't applied to the final recommended model, aut
Learn how to [configure AutoML experiments to use test data (preview) with the SDK](how-to-configure-cross-validation-data-splits.md#provide-test-data-preview) or with the [Azure Machine Learning studio](how-to-use-automated-ml-for-ml-models.md#create-and-run-experiment).
-You can also [test any existing automated ML model (preview)](./v1/how-to-configure-auto-train-v1.md#test-existing-automated-ml-model)), including models from child runs, by providing your own test data or by setting aside a portion of your training data.
+You can also [test any existing automated ML model (preview)](./v1/how-to-configure-auto-train-v1.md#test-existing-automated-ml-model), including models from child jobs, by providing your own test data or by setting aside a portion of your training data.
## Feature engineering
Enable this setting with:
## <a name="ensemble"></a> Ensemble models
-Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your run. Automated machine learning uses both voting and stacking ensemble methods for combining models:
+Automated machine learning supports ensemble models, which are enabled by default. Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. The ensemble iterations appear as the final iterations of your job. Automated machine learning uses both voting and stacking ensemble methods for combining models:
* **Voting**: predicts based on the weighted average of predicted class probabilities (for classification tasks) or predicted regression targets (for regression tasks). * **Stacking**: combines heterogeneous models and trains a meta-model based on the output from the individual models. The current default meta-models are LogisticRegression for classification tasks and ElasticNet for regression/forecasting tasks.
machine-learning Concept Compute Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md
The compute resources you use for your compute targets are attached to a [worksp
## <a name="train"></a> Training compute targets
-Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a run. You can also attach your own compute resource, although support for different scenarios might vary.
+Azure Machine Learning has varying support across different compute targets. A typical model development lifecycle starts with development or experimentation on a small amount of data. At this stage, use a local environment like your local computer or a cloud-based VM. As you scale up your training on larger datasets or perform [distributed training](how-to-train-distributed-gpu.md), use Azure Machine Learning compute to create a single- or multi-node cluster that autoscales each time you submit a job. You can also attach your own compute resource, although support for different scenarios might vary.
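As an illustrative sketch (CLI v2 syntax; names and VM size are placeholders), creating such an autoscaling cluster might look like this:

```azurecli
# Cluster scales between 0 and 4 nodes as jobs are submitted
az ml compute create \
  --name "cpu-cluster" \
  --type AmlCompute \
  --size Standard_DS3_v2 \
  --min-instances 0 \
  --max-instances 4 \
  --resource-group "my-rg" \
  --workspace-name "my-workspace"
```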
[!INCLUDE [aml-compute-target-train](../../includes/aml-compute-target-train.md)]
-Learn more about how to [submit a training run to a compute target](how-to-set-up-training-targets.md).
+Learn more about how to [submit a training job to a compute target](how-to-set-up-training-targets.md).
## <a name="deploy"></a> Compute targets for inference
When created, these compute resources are automatically part of your workspace,
|Capability |Compute cluster |Compute instance | |||| |Single- or multi-node cluster | **&check;** | Single node cluster |
-|Autoscales each time you submit a run | **&check;** | |
+|Autoscales each time you submit a job | **&check;** | |
|Automatic cluster management and job scheduling | **&check;** | **&check;** | |Support for both CPU and GPU resources | **&check;** | **&check;** |
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
Azure Machine Learning is built on top of multiple Azure services. While the dat
In addition to customer-managed keys, Azure Machine Learning also provides a [hbi_workspace flag](/python/api/azureml-core/azureml.core.workspace%28class%29#create-name--auth-none--subscription-id-none--resource-group-none--location-none--create-resource-group-true--sku--basicfriendly-name-none--storage-account-none--key-vault-none--app-insights-none--container-registry-none--cmk-keyvault-none--resource-cmk-uri-none--hbi-workspace-false--default-cpu-compute-target-none--default-gpu-compute-target-none--exist-ok-false--show-output-true-). Enabling this flag reduces the amount of data Microsoft collects for diagnostic purposes and enables [extra encryption in Microsoft-managed environments](../security/fundamentals/encryption-atrest.md). This flag also enables the following behaviors: * Starts encrypting the local scratch disk in your Azure Machine Learning compute cluster, provided you haven't created any previous clusters in that subscription. Otherwise, you need to raise a support ticket to enable encryption of the scratch disk of your compute clusters.
-* Cleans up your local scratch disk between runs.
+* Cleans up your local scratch disk between jobs.
* Securely passes credentials for your storage account, container registry, and SSH account from the execution layer to your compute clusters using your key vault. > [!TIP]
The following resources store metadata for your workspace:
| Service | How it's used | | -- | -- |
-| Azure Cosmos DB | Stores run history data. |
+| Azure Cosmos DB | Stores job history data. |
| Azure Cognitive Search | Stores indices that are used to help query your machine learning content. | | Azure Storage Account | Stores other metadata such as Azure Machine Learning pipelines data. |
Azure Machine Learning uses compute resources to train and deploy machine learni
| Azure Machine Learning compute cluster | OS disk encrypted in Azure Storage with Microsoft-managed keys. Temporary disk is encrypted if the `hbi_workspace` flag is enabled for the workspace. | **Compute cluster**
-The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no runs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
+The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
-Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your run) and encryption support is limited to system-managed keys only.
+Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only during your job) and encryption support is limited to system-managed keys only.
**Compute instance** The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft managed keys. Customer managed key encryption isn't supported for OS and temp disk.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
This process allows you to encrypt both the Data and the OS Disk of the deployed
### Machine Learning Compute **Compute cluster**
-The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no runs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
+The OS disk for each compute node stored in Azure Storage is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. This compute target is ephemeral, and clusters are typically scaled down when no jobs are queued. The underlying virtual machine is de-provisioned, and the OS disk is deleted. Azure Disk Encryption isn't supported for the OS disk.
-Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only for the duration of your run,) and encryption support is limited to system-managed keys only.
+Each virtual machine also has a local temporary disk for OS operations. If you want, you can use the disk to stage training data. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the temporary disk is encrypted. This environment is short-lived (only for the duration of your job) and encryption support is limited to system-managed keys only.
**Compute instance** The OS disk for compute instance is encrypted with Microsoft-managed keys in Azure Machine Learning storage accounts. If the workspace was created with the `hbi_workspace` parameter set to `TRUE`, the local temporary disk on compute instance is encrypted with Microsoft managed keys. Customer managed key encryption is not supported for OS and temp disk.
To secure external calls made to the scoring endpoint, Azure Machine Learning us
Microsoft may collect non-user identifying information like resource names (for example the dataset name, or the machine learning experiment name), or job environment variables for diagnostic purposes. All such data is stored using Microsoft-managed keys in storage hosted in Microsoft owned subscriptions and follows [Microsoft's standard Privacy policy and data handling standards](https://privacy.microsoft.com/privacystatement). This data is kept within the same region as your workspace.
-Microsoft also recommends not storing sensitive information (such as account key secrets) in environment variables. Environment variables are logged, encrypted, and stored by us. Similarly when naming [run_id](/python/api/azureml-core/azureml.core.run%28class%29), avoid including sensitive information such as user names or secret project names. This information may appear in telemetry logs accessible to Microsoft Support engineers.
+Microsoft also recommends not storing sensitive information (such as account key secrets) in environment variables. Environment variables are logged, encrypted, and stored by Microsoft. Similarly, when naming your jobs, avoid including sensitive information such as user names or secret project names. This information may appear in telemetry logs accessible to Microsoft Support engineers.
You may opt out from diagnostic data being collected by setting the `hbi_workspace` parameter to `TRUE` while provisioning the workspace. This functionality is supported when using the AzureML Python SDK, the Azure CLI, REST APIs, or Azure Resource Manager templates.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Use a visual canvas to build an end-to-end machine learning workflow. Train, tes
+ Drag-and-drop [datasets](#datasets) and [components](#component) onto the canvas. + Connect the components to create a [pipeline draft](#pipeline-draft).
-+ Submit a [pipeline run](#pipeline-run) using the compute resources in your Azure Machine Learning workspace.
++ Submit a [pipeline job](#pipeline-job) using the compute resources in your Azure Machine Learning workspace. + Convert your **training pipelines** to **inference pipelines**. + [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit a new pipeline that runs with different parameters and datasets. + Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and datasets.
A valid pipeline has these characteristics:
* All input ports for components must have some connection to the data flow. * All required parameters for each component must be set.
-When you're ready to run your pipeline draft, you submit a pipeline run.
+When you're ready to run your pipeline draft, you submit a pipeline job.
-### Pipeline run
+### Pipeline job
-Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline run**. You can go back to any pipeline run to inspect it for troubleshooting or auditing. **Clone** a pipeline run to create a new pipeline draft for you to edit.
+Each time you run a pipeline, the configuration of the pipeline and its results are stored in your workspace as a **pipeline job**. You can go back to any pipeline job to inspect it for troubleshooting or auditing. **Clone** a pipeline job to create a new pipeline draft for you to edit.
-Pipeline runs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize run history. You can set the experiment for every pipeline run.
+Pipeline jobs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize job history. You can set the experiment for every pipeline job.
## Datasets
To learn how to deploy your model, see [Tutorial: Deploy a machine learning mode
## Publish
-You can also publish a pipeline to a **pipeline endpoint**. Similar to an online endpoint, a pipeline endpoint lets you submit new pipeline runs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
+You can also publish a pipeline to a **pipeline endpoint**. Similar to an online endpoint, a pipeline endpoint lets you submit new pipeline jobs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
Published pipelines are flexible, they can be used to train or retrain models, [perform batch inferencing](how-to-run-batch-predictions-designer.md), process new data, and much more. You can publish multiple pipelines to a single pipeline endpoint and specify which pipeline version to run.
machine-learning Concept Enterprise Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md
Here's the authentication process for Azure Machine Learning using multi-factor
1. The client signs in to Azure AD and gets an Azure Resource Manager token. 1. The client presents the token to Azure Resource Manager and to all Azure Machine Learning.
-1. Azure Machine Learning provides a Machine Learning service token to the user compute target (for example, Azure Machine Learning compute cluster). This token is used by the user compute target to call back into the Machine Learning service after the run is complete. The scope is limited to the workspace.
+1. Azure Machine Learning provides a Machine Learning service token to the user compute target (for example, Azure Machine Learning compute cluster). This token is used by the user compute target to call back into the Machine Learning service after the job is complete. The scope is limited to the workspace.
[![Authentication in Azure Machine Learning](media/concept-enterprise-security/authentication.png)](media/concept-enterprise-security/authentication.png#lightbox)
machine-learning Concept Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-environments.md
Last updated 09/23/2021
# What are Azure Machine Learning environments?
-Azure Machine Learning environments are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, environment variables, and software settings around your training and scoring scripts. They also specify run times (Python, Spark, or Docker). The environments are managed and versioned entities within your Machine Learning workspace that enable reproducible, auditable, and portable machine learning workflows across a variety of compute targets.
+Azure Machine Learning environments are an encapsulation of the environment where your machine learning training happens. They specify the Python packages, environment variables, and software settings around your training and scoring scripts. They also specify runtimes (Python, Spark, or Docker). The environments are managed and versioned entities within your Machine Learning workspace that enable reproducible, auditable, and portable machine learning workflows across a variety of compute targets.
You can use an `Environment` object on your local compute to: * Develop your training script.
You can use an `Environment` object on your local compute to:
* Deploy your model with that same environment. * Revisit the environment in which an existing model was trained.
-The following diagram illustrates how you can use a single `Environment` object in both your run configuration (for training) and your inference and deployment configuration (for web service deployments).
+The following diagram illustrates how you can use a single `Environment` object in both your job configuration (for training) and your inference and deployment configuration (for web service deployments).
![Diagram of an environment in machine learning workflow](./media/concept-environments/ml-environment.png)
-The environment, compute target and training script together form the run configuration: the full specification of a training run.
+The environment, compute target and training script together form the job configuration: the full specification of a training job.
## Types of environments
For code samples, see the "Manage environments" section of [How to use environme
## Environment building, caching, and reuse
-Azure Machine Learning builds environment definitions into Docker images and conda environments. It also caches the environments so they can be reused in subsequent training runs and service endpoint deployments. Running a training script remotely requires the creation of a Docker image, but a local run can use a conda environment directly.
+Azure Machine Learning builds environment definitions into Docker images and conda environments. It also caches the environments so they can be reused in subsequent training jobs and service endpoint deployments. Running a training script remotely requires the creation of a Docker image, but a local job can use a conda environment directly.
-### Submitting a run using an environment
+### Submitting a job using an environment
-When you first submit a remote run using an environment, the Azure Machine Learning service invokes an [ACR Build Task](../container-registry/container-registry-tasks-overview.md) on the Azure Container Registry (ACR) associated with the Workspace. The built Docker image is then cached on the Workspace ACR. Curated environments are backed by Docker images that are cached in Global ACR. At the start of the run execution, the image is retrieved by the compute target from the relevant ACR.
+When you first submit a remote job using an environment, the Azure Machine Learning service invokes an [ACR Build Task](../container-registry/container-registry-tasks-overview.md) on the Azure Container Registry (ACR) associated with the Workspace. The built Docker image is then cached on the Workspace ACR. Curated environments are backed by Docker images that are cached in Global ACR. At the start of the job execution, the image is retrieved by the compute target from the relevant ACR.
-For local runs, a Docker or conda environment is created based on the environment definition. The scripts are then executed on the target compute - a local runtime environment or local Docker engine.
+For local jobs, a Docker or conda environment is created based on the environment definition. The scripts are then executed on the target compute - a local runtime environment or local Docker engine.
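As a sketch only (CLI v2 syntax, with a hypothetical YAML file), registering an environment definition so it can be reused across jobs looks roughly like this:

```azurecli
az ml environment create \
  --file my-environment.yml \
  --resource-group "my-rg" \
  --workspace-name "my-workspace"
```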
### Building environments as Docker images
The second step is omitted if you specify [user-managed dependencies](/python/ap
### Image caching and reuse
-If you use the same environment definition for another run, Azure Machine Learning reuses the cached image from the Workspace ACR to save time.
+If you use the same environment definition for another job, Azure Machine Learning reuses the cached image from the Workspace ACR to save time.
To view the details of a cached image, check the Environments page in Azure Machine Learning studio or use the [`Environment.get_image_details`](/python/api/azureml-core/azureml.core.environment.environment#get-image-details-workspace-) method.
The hash isn't affected by the environment name or version. If you rename your e
> [!NOTE] > You will not be able to submit any local changes to a curated environment without changing the name of the environment. The prefixes "AzureML-" and "Microsoft" are reserved exclusively for curated environments, and your job submission will fail if the name starts with either of them.
-The environment's computed hash value is compared with those in the Workspace and global ACR, or on the compute target (local runs only). If there is a match then the cached image is pulled and used, otherwise an image build is triggered.
+The environment's computed hash value is compared with those in the Workspace and global ACR, or on the compute target (local jobs only). If there is a match then the cached image is pulled and used, otherwise an image build is triggered.
The following diagram shows three environment definitions. Two of them have different names and versions but identical base images and Python packages, which results in the same hash and corresponding cached image. The third environment has different Python packages and versions, leading to a different hash and cached image.
machine-learning Concept Manage Ml Pitfalls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-manage-ml-pitfalls.md
Automated ML also implements explicit **model complexity limitations** to preven
Imbalanced data is commonly found in data for machine learning classification scenarios, and refers to data that contains a disproportionate ratio of observations in each class. This imbalance can lead to a falsely perceived positive effect of a model's accuracy, because the input data has bias towards one class, which results in the trained model to mimic that bias.
-In addition, automated ML runs generate the following charts automatically, which can help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
+In addition, automated ML jobs generate the following charts automatically, which can help you understand the correctness of the classifications of your model, and identify models potentially impacted by imbalanced data.
Chart| Description |
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Azure Machine Learning only uses MLflow Tracking for metric logging and artifact
> [!NOTE] > Unlike the Azure Machine Learning SDK v1, there is no logging functionality in the SDK v2 (preview), and it is recommended to use MLflow for logging and tracking.
-[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api) is a component of MLflow that logs and tracks your training run metrics and model artifacts, no matter your experiment's environment--locally on your computer, on a remote compute target, a virtual machine or an Azure Machine Learning compute instance.
+[MLflow](https://www.mlflow.org) is an open-source library for managing the lifecycle of your machine learning experiments. MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), form a component of MLflow that logs and tracks your training job metrics and model artifacts, no matter your experiment's environment: locally on your computer, on a remote compute target, on a virtual machine, or on an Azure Machine Learning compute instance.
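For example, when tracking against an Azure Machine Learning workspace, you can point the MLflow client at the workspace's tracking URI. A sketch with the CLI v2 `ml` extension (names are placeholders):

```azurecli
export MLFLOW_TRACKING_URI=$(az ml workspace show \
  --name "my-workspace" \
  --resource-group "my-rg" \
  --query mlflow_tracking_uri \
  --output tsv)
```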
## Track experiments
You can [Deploy MLflow models to an online endpoint](how-to-deploy-mlflow-models
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
+You can use MLflow's tracking URI and logging API, collectively known as MLflow Tracking, to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support (preview). You can submit jobs locally with Azure Machine Learning tracking or migrate your jobs to the cloud, such as by using an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
Learn more at [Train ML models with MLflow projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Machine Learning gives you the capability to track the end-to-end audit trail of
- Machine Learning [integrates with Git](how-to-set-up-training-targets.md#gitintegration) to track information on which repository, branch, and commit your code came from. - [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data. - [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input.-- Machine Learning Run history stores a snapshot of the code, data, and computes used to train a model.
+- Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.
- The Machine Learning Model Registry captures all the metadata associated with your model. For example, metadata includes which experiment trained it, where it's being deployed, and if its deployments are healthy.-- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (run) events.
+- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (job) events.
> [!TIP] > While some information on models and datasets is automatically captured, you can add more information by using _tags_. When you look for registered models and datasets in your workspace, you can use tags as a filter.
A theme of the preceding steps is that your retraining should be automated, not
## Automate the machine learning lifecycle
-You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training run. The results of the run can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
+You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training job. The results of the job can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
The [Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. It provides the following enhancements to Azure Pipelines:
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-plan-manage-cost.md
Use the following tips to help you manage and optimize your compute resource cos
- Configure your training clusters for autoscaling - Set quotas on your subscription and workspaces-- Set termination policies on your training run
+- Set termination policies on your training job
- Use low-priority virtual machines (VM) - Schedule compute instances to shut down and start up automatically - Use an Azure Reserved VM Instance
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ml.md
Azure Machine Learning's [Responsible AI scorecard](./how-to-responsible-ai-sc
The ML platform also enables decision-making by informing model-driven and data-driven business decisions: -- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the[Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the [Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
- Model-driven insights, to answer end-users' questions such as "what can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md). ## Next steps
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Azure Machine Learning provides several ways to train your models, from code-fir
| Training method | Description | | -- | -- |
- | [Run configuration](#run-configuration) | A **typical way to train models** is to use a training script and run configuration. The run configuration provides the information needed to configure the training environment used to train your model. You can specify your training script, compute target, and Azure ML environment in your run configuration and run a training job. |
- | [Automated machine learning](#automated-machine-learning) | Automated machine learning allows you to **train models without extensive data science or programming knowledge**. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and hyperparameter tuning. You don't have to worry about defining a run configuration when using automated machine learning. |
+ | [Run configuration](#run-configuration) | A **typical way to train models** is to use a training script and job configuration. The job configuration provides the information needed to configure the training environment used to train your model. You can specify your training script, compute target, and Azure ML environment in your job configuration and run a training job. |
+ | [Automated machine learning](#automated-machine-learning) | Automated machine learning allows you to **train models without extensive data science or programming knowledge**. For people with a data science and programming background, it provides a way to save time and resources by automating algorithm selection and hyperparameter tuning. You don't have to worry about defining a job configuration when using automated machine learning. |
| [Machine learning pipeline](#machine-learning-pipeline) | Pipelines are not a different training method, but a **way of defining a workflow using modular, reusable steps**, that can include training as part of the workflow. Machine learning pipelines support using automated machine learning and run configuration to train models. Since pipelines are not focused specifically on training, the reasons for using a pipeline are more varied than the other training methods. Generally, you might use a pipeline when:<br>* You want to **schedule unattended processes** such as long running training jobs or data preparation.<br>* Use **multiple steps** that are coordinated across heterogeneous compute resources and storage locations.<br>* Use the pipeline as a **reusable template** for specific scenarios, such as retraining or batch scoring.<br>* **Track and version data sources, inputs, and outputs** for your workflow.<br>* Your workflow is **implemented by different teams that work on specific steps independently**. Steps can then be joined together in a pipeline to implement the workflow. | + **Designer**: Azure Machine Learning designer provides an easy entry-point into machine learning for building proof of concepts, or for users with little coding experience. It allows you to train models using a drag and drop web-based UI. You can use Python code as part of the design, or train models without writing any code.
-+ **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training run on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines.
++ **Azure CLI**: The machine learning CLI provides commands for common tasks with Azure Machine Learning, and is often used for **scripting and automating tasks**. For example, once you've created a training script or pipeline, you might use the Azure CLI to start a training job on a schedule or when the data files used for training are updated. For training models, it provides commands that submit training jobs. It can submit jobs using run configurations or pipelines. Each of these training methods can use different types of compute resources for training. Collectively, these resources are referred to as [__compute targets__](v1/concept-azure-machine-learning-architecture.md#compute-targets). A compute target can be a local machine or a cloud resource, such as an Azure Machine Learning Compute, Azure HDInsight, or a remote virtual machine.
machine-learning Concept Train Model Git Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-model-git-integration.md
Title: Git integration for Azure Machine Learning
-description: Learn how Azure Machine Learning integrates with a local Git repository to track repository, branch, and current commit information as part of a training run.
+description: Learn how Azure Machine Learning integrates with a local Git repository to track repository, branch, and current commit information as part of a training job.
SSH displays this fingerprint when it connects to an unknown host to protect you
## Track code that comes from Git repositories
-When you submit a training run from the Python SDK or Machine Learning CLI, the files needed to train the model are uploaded to your workspace. If the `git` command is available on your development environment, the upload process uses it to check if the files are stored in a git repository. If so, then information from your git repository is also uploaded as part of the training run. This information is stored in the following properties for the training run:
+When you submit a training job from the Python SDK or Machine Learning CLI, the files needed to train the model are uploaded to your workspace. If the `git` command is available on your development environment, the upload process uses it to check if the files are stored in a git repository. If so, then information from your git repository is also uploaded as part of the training job. This information is stored in the following properties for the training job:
| Property | Git command used to get the value | Description | | -- | -- | -- | | `azureml.git.repository_uri` | `git ls-remote --get-url` | The URI that your repository was cloned from. | | `mlflow.source.git.repoURL` | `git ls-remote --get-url` | The URI that your repository was cloned from. |
-| `azureml.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the run was submitted. |
-| `mlflow.source.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the run was submitted. |
-| `azureml.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the run. |
-| `mlflow.source.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the run. |
+| `azureml.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the job was submitted. |
+| `mlflow.source.git.branch` | `git symbolic-ref --short HEAD` | The active branch when the job was submitted. |
+| `azureml.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the job. |
+| `mlflow.source.git.commit` | `git rev-parse HEAD` | The commit hash of the code that was submitted for the job. |
| `azureml.git.dirty` | `git status --porcelain .` | `True`, if the branch/commit is dirty; otherwise, `false`. |
-This information is sent for runs that use an estimator, machine learning pipeline, or script run.
+This information is sent for jobs that use an estimator, machine learning pipeline, or script run.
If your training files are not located in a git repository on your development environment, or the `git` command is not available, then no git-related information is tracked.
If your training files are not located in a git repository on your development e
## View the logged information
-The git information is stored in the properties for a training run. You can view this information using the Azure portal or Python SDK.
+The git information is stored in the properties for a training job. You can view this information using the Azure portal or Python SDK.
### Azure portal 1. From the [studio portal](https://ml.azure.com), select your workspace.
-1. Select __Experiments__, and then select one of your experiments.
-1. Select one of the runs from the __RUN NUMBER__ column.
+1. Select __Jobs__, and then select one of your experiments.
+1. Select one of the jobs from the __Display name__ column.
1. Select __Outputs + logs__, and then expand the __logs__ and __azureml__ entries. Select the link that begins with __###\_azure__. The logged information contains text similar to the following JSON:
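For the Python SDK route, here's a hedged sketch of reading the same git properties from a completed job; the experiment name and run ID are placeholders:

```python
from azureml.core import Workspace, Experiment, Run

ws = Workspace.from_config()
run = Run(Experiment(ws, "my-experiment"), run_id="<run-id>")

# get_properties() returns a dictionary that includes the git metadata, if any.
properties = run.get_properties()
for key in ("azureml.git.repository_uri", "azureml.git.branch",
            "azureml.git.commit", "azureml.git.dirty"):
    print(key, "=", properties.get(key))
```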
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
Next to the regular release cadence, hot fixes are applied in the case vulnerabi
> [!NOTE] > The host OS is not the OS version you might specify for an [environment](how-to-use-environments.md) when training or deploying a model. Environments run inside Docker. Docker runs on the host OS.
-## Microsoft-managed container images
+## Microsoft-managed container images
[Base docker images](https://github.com/Azure/AzureML-Containers) maintained by Azure Machine Learning get security patches frequently to address newly discovered vulnerabilities.
machine-learning Dsvm Tools Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 05/12/2021 Last updated : 06/23/2022 # Development tools on the Azure Data Science Virtual Machine
The Data Science Virtual Machine (DSVM) bundles several popular tools in a highl
| Typical uses | Code editor and Git integration | | How to use and run it | Desktop shortcut (`C:\Program Files (x86)\Microsoft VS Code\Code.exe`) in Windows, desktop shortcut or terminal (`code`) in Linux |
-## RStudio Desktop
-
-| Category | Value |
-|--|--|
-| What is it? | Client IDE for R language |
-| Supported DSVM versions | Windows, Linux |
-| Typical uses | R development |
-| How to use and run it | Desktop shortcut (`C:\Program Files\RStudio\bin\rstudio.exe`) on Windows, desktop shortcut (`/usr/bin/rstudio`) on Linux |
- ## PyCharm | Category | Value |
machine-learning Dsvm Tools Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-languages.md
--++ Previously updated : 05/12/2021 Last updated : 06/23/2022
artificial intelligence (AI) applications. Here are some of the notable ones.
Open a command prompt and type `R`.
-* Use in an IDE:
-
- To edit R scripts in an IDE, you can use RStudio, which is installed on the DSVM images by default.
- * Use in Jupyter Lab Open a Launcher tab in Jupyter Lab and select the type and kernel of your new document. If you want your document to be
artificial intelligence (AI) applications. Here are some of the notable ones.
* Install R packages:
- You can install new R packages either by using the `install.packages()` function or by using RStudio.
+ You can install new R packages by using the `install.packages()` function.
## Julia
machine-learning Linux Dsvm Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md
description: Learn how to complete several common data science tasks by using th
--++ Previously updated : 05/10/2021 Last updated : 06/23/2022
To get copies of the code samples that are used in this walkthrough, use git to
git clone https://github.com/Azure/Azure-MachineLearning-DataScience.git ```
-Open a terminal window and start a new R session in the R interactive console. You also can use RStudio, which is preinstalled on the DSVM.
-
+Open a terminal window and start a new R session in the R interactive console.
To import the data and set up the environment: ```R
select top 10 spam, char_freq_dollar from spam;
GO ```
-You can also query by using SQuirreL SQL. Follow steps similar to PostgreSQL by using the SQL Server JDBC driver. The JDBC driver is in the /usr/share/java/jdbcdrivers/sqljdbc42.jar folder.
+You can also query by using SQuirreL SQL. Follow steps similar to PostgreSQL by using the SQL Server JDBC driver. The JDBC driver is in the /usr/share/java/jdbcdrivers/sqljdbc42.jar folder.
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 04/02/2020 Last updated : 06/23/2022
The key differences between these two product offerings are detailed below:
|Built-in<br>Hosted Notebooks | No<br>(requires additional configuration) | Yes | |Built-in SSO | No <br>(requires additional configuration) | Yes | |Built-in Collaboration | No | Yes |
-|Pre-installed Tools | Jupyter(lab), RStudio Server, VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab)<br> RStudio Server |
+|Pre-installed Tools | Jupyter(lab), VSCode,<br> Visual Studio, PyCharm, Juno,<br>Power BI Desktop, SSMS, <br>Microsoft Office 365, Apache Drill | Jupyter(lab) |
## Sample use cases
machine-learning Reference Ubuntu Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/reference-ubuntu-vm.md
Title: 'Reference: Ubuntu Data Science Virtual Machine' description: Details on tools included in the Ubuntu Data Science Virtual Machine-+ - Previously updated : 05/12/2021+ Last updated : 06/23/2022
easier to troubleshoot issues, compared to developing on a Spark cluster.
## IDEs and editors
-You have a choice of several code editors, including VS.Code, PyCharm, RStudio, IntelliJ, vi/Vim, Emacs.
+You have a choice of several code editors, including VS.Code, PyCharm, IntelliJ, vi/Vim, Emacs.
-VS.Code, PyCharm, RStudio, and IntelliJ are graphical editors. To use them, you need to be signed in to a graphical
+VS.Code, PyCharm, and IntelliJ are graphical editors. To use them, you need to be signed in to a graphical
desktop. You open them by using desktop and application menu shortcuts. Vim and Emacs are text-based editors. On Emacs, the ESS add-on package makes working with R easier within the Emacs
You can exit Rattle and R. Now you can modify the generated R script. Or, use th
## Next steps
-Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
+Have additional questions? Consider creating a [support ticket](https://azure.microsoft.com/support/create-ticket/).
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 05/12/2021 Last updated : 06/23/2022
The Data Science Virtual Machine comes with the most useful data-science tools p
| [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | | | [Visual Studio 2019 Community Edition](https://www.visualstudio.com/community/) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | [Visual Studio on the DSVM](dsvm-tools-development.md#visual-studio-community-edition) | | [Visual Studio Code](https://code.visualstudio.com/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [Visual Studio Code on the DSVM](./dsvm-tools-development.md#visual-studio-code) |
-| [RStudio Desktop](https://www.rstudio.com/products/rstudio/#Desktop) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [RStudio Desktop on the DSVM](./dsvm-tools-development.md#rstudio-desktop) |
-| [RStudio Server](https://www.rstudio.com/products/rstudio/#Server) <br/> (disabled by default) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | |
| [PyCharm Community Edition](https://www.jetbrains.com/pycharm/) | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [PyCharm on the DSVM](./dsvm-tools-development.md#pycharm) | | [IntelliJ IDEA](https://www.jetbrains.com/idea/) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | | | [Vim](https://www.vim.org) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | |
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
--++ Previously updated : 05/08/2020 Last updated : 06/23/2022
Last updated 05/08/2020
The Windows Data Science Virtual Machine (DSVM) is a powerful data science development environment where you can perform data exploration and modeling tasks. The environment comes already built and bundled with several popular data analytics tools that make it easy to get started with your analysis for on-premises, cloud, or hybrid deployments.
-The DSVM works closely with Azure services. It can read and process data that's already stored on Azure, in Azure Synapse (formerly SQL DW),Azure Data Lake, Azure Storage, or Azure Cosmos DB. It can also take advantage of other analytics tools, such as Azure Machine Learning.
+The DSVM works closely with Azure services. It can read and process data that's already stored on Azure, in Azure Synapse (formerly SQL DW), Azure Data Lake, Azure Storage, or Azure Cosmos DB. It can also take advantage of other analytics tools, such as Azure Machine Learning.
In this article, you'll learn how to use your DSVM to perform data science tasks and interact with other Azure services. Here are some of the things you can do on the DSVM:
When you're in the notebook, you can explore your data, build the model, and tes
You can use languages like R and Python to do your data analytics right on the DSVM.
-For R, you can use an IDE like RStudio that can be found on the start menu or on the desktop. Or you can use R Tools for Visual Studio. Microsoft has provided additional libraries on top of the open-source CRAN R to enable scalable analytics and the ability to analyze data larger than the memory size allowed in parallel chunked analysis.
+For R, you can use R Tools for Visual Studio. Microsoft has provided additional libraries on top of the open-source CRAN R to enable scalable analytics and the ability to analyze data larger than the memory size allowed in parallel chunked analysis.
For Python, you can use an IDE like Visual Studio Community Edition, which has the Python Tools for Visual Studio (PTVS) extension pre-installed. By default, only Python 3.6, the root Conda environment, is configured on PTVS. To enable Anaconda Python 2.7, take the following steps:
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-cicd-data-ingestion.md
steps:
artifact: di-notebooks ```
-The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen:
-
-![linting unit tests](media/how-to-cicd-data-ingestion/linting-unit-tests.png)
+The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen.
If the linting and unit testing is successful, the pipeline will copy the source code to the artifact repository to be used by the subsequent deployment steps.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
For each compute instance in a workspace that you created (or that was created f
-[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access, and can terminal in through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign-in and all actions will use that userΓÇÖs identity for Azure RBAC and attribution of experiment runs. SSH access is controlled through public/private key mechanism.
+[Azure RBAC](../role-based-access-control/overview.md) allows you to control which users in the workspace can create, delete, start, stop, and restart a compute instance. All users in the workspace contributor and owner role can create, delete, start, stop, and restart compute instances across the workspace. However, only the creator of a specific compute instance, or the user assigned if it was created on their behalf, is allowed to access Jupyter, JupyterLab, and RStudio on that compute instance. A compute instance is dedicated to a single user who has root access and can use the terminal through Jupyter/JupyterLab/RStudio. Compute instance will have single-user sign-in, and all actions will use that user's identity for Azure RBAC and attribution of experiment jobs. SSH access is controlled through a public/private key mechanism.
These actions can be controlled by Azure RBAC: * *Microsoft.MachineLearningServices/workspaces/computes/read*
To create a compute instance, you'll need permissions for the following actions:
* [Access the compute instance terminal](how-to-access-terminal.md) * [Create and manage files](how-to-manage-files.md) * [Update the compute instance to the latest VM image](concept-vulnerability-management.md#compute-instance)
-* [Submit a training run](how-to-set-up-training-targets.md)
+* [Submit a training job](how-to-set-up-training-targets.md)
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
When using a customer-managed key, Azure Machine Learning creates a secondary re
An additional configuration you can provide for your data is to set the **confidential_data** parameter to **true**. Doing so does the following: * Starts encrypting the local scratch disk for Azure Machine Learning compute clusters, provided you have not created any previous clusters in your subscription. If you have previously created a cluster in the subscription, open a support ticket to have encryption of the scratch disk enabled for your compute clusters.
-* Cleans up the local scratch disk between runs.
+* Cleans up the local scratch disk between jobs.
* Securely passes credentials for the storage account, container registry, and SSH account from the execution layer to your compute clusters by using key vault. * Enables IP filtering to ensure the underlying batch pools cannot be called by any external services other than AzureMachineLearningService.
To avoid this problem, we recommend one of the following approaches:
After these changes, you can specify the ID of the existing Key Vault resource when running the template. The template will then reuse the Key Vault by setting the `keyVault` property of the workspace to its ID.
- To get the ID of the Key Vault, you can reference the output of the original template run or use the Azure CLI. The following command is an example of using the Azure CLI to get the Key Vault resource ID:
+ To get the ID of the Key Vault, you can reference the output of the original template deployment or use the Azure CLI. The following command is an example of using the Azure CLI to get the Key Vault resource ID:
```azurecli az keyvault show --name mykeyvault --resource-group myresourcegroup --query id
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-ingest-adf.md
#Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models. + # Data ingestion with Azure Data Factory In this article, you learn about the available options for building a data ingestion pipeline with [Azure Data Factory](../data-factory/introduction.md). This Azure Data Factory pipeline is used to ingest data for use with [Azure Machine Learning](overview-what-is-azure-machine-learning.md). Data Factory allows you to easily extract, transform, and load (ETL) data. Once the data has been transformed and loaded into storage, it can be used to train your machine learning models in Azure Machine Learning.
The function is invoked with the [Azure Data Factory Azure Function activity](..
## Azure Data Factory with Custom Component activity
-In this option, the data is processed with custom Python code wrapped into an executable. It is invoked with an [ Azure Data Factory Custom Component activity](../data-factory/transform-data-using-dotnet-custom-activity.md). This approach is a better fit for large data than the previous technique.
+In this option, the data is processed with custom Python code wrapped into an executable. It is invoked with an [Azure Data Factory Custom Component activity](../data-factory/transform-data-using-dotnet-custom-activity.md). This approach is a better fit for large data than the previous technique.
![Diagram shows an Azure Data Factory pipeline, with a custom component and Run M L Pipeline, and an Azure Machine Learning pipeline, with Train Model, and how they interact with raw data and prepared data.](media/how-to-data-ingest-adf/adf-customcomponent.png)
This method is recommended for [Machine Learning Operations (MLOps) workflows](c
Each time the Data Factory pipeline runs, 1. The data is saved to a different location in storage.
-1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](concept-ml-pipelines.md). When calling the ML pipeline, the data location and run ID are sent as parameters.
+1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](concept-ml-pipelines.md). When calling the ML pipeline, the data location and job ID are sent as parameters.
1. The ML pipeline can then create an Azure Machine Learning datastore and dataset with the data location. Learn more in [Execute Azure Machine Learning pipelines in Data Factory](../data-factory/transform-data-machine-learning-service.md). ![Diagram shows an Azure Data Factory pipeline and an Azure Machine Learning pipeline and how they interact with raw data and prepared data. The Data Factory pipeline feeds data to the Prepared Data database, which feeds a data store, which feeds datasets in the Machine Learning workspace.](media/how-to-data-ingest-adf/aml-dataset.png)
Each time the Data Factory pipeline runs,
Once the data is accessible through a datastore or dataset, you can use it to train an ML model. The training process might be part of the same ML pipeline that is called from ADF. Or it might be a separate process such as experimentation in a Jupyter notebook.
-Since datasets support versioning, and each run from the pipeline creates a new version, it's easy to understand which version of the data was used to train a model.
+Since datasets support versioning, and each job from the pipeline creates a new version, it's easy to understand which version of the data was used to train a model.
### Read data directly from storage
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Once your environment is set up, use the VS Code debugger to test and debug your
- To debug startup behavior, place your breakpoint(s) inside the `init` function. - To debug scoring behavior, place your breakpoint(s) inside the `run` function.
-1. Select the VS Code Run view.
+1. Select the VS Code Run and Debug view.
1. In the Run and Debug dropdown, select **Azure ML: Debug Local Endpoint** to start debugging your endpoint locally. In the **Breakpoints** section of the Run view, check that:
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-parallel-run-step.md
Last updated 10/21/2021
#Customer intent: As a data scientist, I want to figure out why my ParallelRunStep doesn't run so that I can fix it. + # Troubleshooting the ParallelRunStep [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-and-where.md
adobe-target: true + # Deploy machine learning models to Azure [!INCLUDE [sdk & cli v1](../../includes/machine-learning-dev-v1.md)]
Set `-p` to the path of a folder or a file that you want to register.
For more information on `az ml model register`, see the [reference documentation](/cli/azure/ml(v1)/model).
-### Register a model from an Azure ML training run
+### Register a model from an Azure ML training job
If you need to register a model that was created previously through an Azure Machine Learning training job, you can specify the experiment, run, and path to the model:
To include multiple files in the model registration, set `model_path` to the pat
For more information, see the documentation for the [Model class](/python/api/azureml-core/azureml.core.model.model).
-### Register a model from an Azure ML training run
+### Register a model from an Azure ML training job
When you use the SDK to train a model, you can receive either a [Run](/python/api/azureml-core/azureml.core.run.run) object or an [AutoMLRun](/python/api/azureml-train-automl-client/azureml.train.automl.run.automlrun) object, depending on how you trained the model. Each object can be used to register a model created by an experiment run.
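As a hedged sketch with the v1 SDK (the experiment name, run ID, and model path are placeholders), registering a model directly from a completed run looks like this:

```python
from azureml.core import Workspace, Experiment, Run

ws = Workspace.from_config()
run = Run(Experiment(ws, "my-experiment"), run_id="<run-id>")

# register_model picks up the file or folder logged under the run's outputs.
model = run.register_model(model_name="my-model",
                           model_path="outputs/model.pkl")
print(model.name, model.version)
```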
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-fpga-web-service.md
Next, create a Docker image from the converted model and all dependencies. This
#### Deploy to a local edge server
-All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md
-) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
+All [Azure Data Box Edge devices](../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models).
### Consume the deployed model
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
+
+ Title: Deploy MLflow models
+
+description: Learn to deploy your MLflow model to the deployment targets supported by Azure.
+++++ Last updated : 06/06/2022++
+ms.devlang: azurecli
++
+# Deploy MLflow models
+++
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning CLI extension you are using:"]
+> * [v1](./v1/how-to-deploy-mlflow-models.md)
+> * [v2 (current version)](how-to-deploy-mlflow-models-online-endpoints.md)
+
+In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure ML for both real-time and batch inference. Azure ML supports no-code deployment of models created and logged with MLflow. This means that you don't have to provide a scoring script or an environment. Those models can be deployed to ACI (Azure Container Instances), AKS (Azure Kubernetes Service), or our managed inference services (referred to as MIR).
+
+For no-code-deployment, Azure Machine Learning
+
+* Dynamically installs Python packages provided in the `conda.yaml` file, which means the dependencies are installed during container runtime.
+ * The base container image/curated environment used for dynamic installation is `mcr.microsoft.com/azureml/mlflow-ubuntu18.04-py37-cpu-inference` or `AzureML-mlflow-ubuntu18.04-py37-cpu-inference`
+* Provides an MLflow base image/curated environment that contains the following items:
+ * [`azureml-inference-server-http`](how-to-inference-server-http.md)
+ * [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst)
+ * `pandas`
+ * The scoring script baked into the image.
+
+## Supported targets for MLflow models
+
+The following table shows the target support for MLflow models in Azure ML:
++
+| Scenario | Azure Container Instance | Azure Kubernetes | Managed Inference |
+| :- | :-: | :-: | :-: |
+| Deploying models logged with MLflow to real time inference | **&check;**<sup>1</sup> | **&check;**<sup>1</sup> | **&check;**<sup>1</sup> |
+| Deploying models logged with MLflow to batch inference | <sup>2</sup> | <sup>2</sup> | **&check;** |
+| Deploying models with ColSpec signatures | **&check;**<sup>4</sup> | **&check;**<sup>4</sup> | **&check;**<sup>4</sup> |
+| Deploying models with TensorSpec signatures | **&check;**<sup>5</sup> | **&check;**<sup>5</sup> | **&check;**<sup>5</sup> |
+| Run models logged with MLflow in your local compute with Azure ML CLI v2 | **&check;** | **&check;** | <sup>3</sup> |
+| Debug online endpoints locally in Visual Studio Code (preview) | | | |
+
+> [!NOTE]
+> - <sup>1</sup> Spark flavor is not supported at the moment for deployment.
> - <sup>2</sup> We suggest you use Azure Machine Learning Pipelines with Parallel Run Step.
+> - <sup>3</sup> For deploying MLflow models locally, use the MLflow CLI command `mlflow models serve -m <MODEL_NAME>`. Configure the environment variable `MLFLOW_TRACKING_URI` with the URL of your tracking server.
> - <sup>4</sup> The data type `mlflow.types.DataType.Binary` is not supported as a column type. For models that work with images, we suggest you use either (a) tensor inputs using the [TensorSpec input type](https://mlflow.org/docs/latest/python_api/mlflow.types.html#mlflow.types.TensorSpec), or (b) `Base64` encoding schemes with an `mlflow.types.DataType.String` column type, which is commonly used when binary data needs to be encoded, stored, and transferred over media.
> - <sup>5</sup> Tensors with unspecified shapes (`-1`) are currently supported only for the batch dimension. For instance, a signature with shape `(-1, -1, -1, 3)` is not supported, but `(-1, 300, 300, 3)` is.
+
+For more information about how to specify requests to online-endpoints or the supported file types in batch-endpoints, check [Considerations when deploying to real time inference](#considerations-when-deploying-to-real-time-inference) and [Considerations when deploying to batch inference](#considerations-when-deploying-to-batch-inference).
+
+## Deployment tools
+
+There are three workflows for deploying MLflow models to Azure ML:
+
+- [Deploy using the MLflow plugin](#deploy-using-the-mlflow-plugin)
+- [Deploy using CLI (v2)](#deploy-using-cli-v2)
+- [Deploy using Azure Machine Learning studio](#deploy-using-azure-machine-learning-studio)
+
+### Which option to use?
+
+If you are familiar with MLflow or your platform supports MLflow natively (like Azure Databricks) and you wish to continue using the same set of methods, use the `azureml-mlflow` plugin. On the other hand, if you are more familiar with the [Azure ML CLI v2](concept-v2.md), you want to automate deployments using automation pipelines, or you want to keep deployment configuration in a git repository, we recommend using the [Azure ML CLI v2](concept-v2.md). If you want to quickly deploy and test models trained with MLflow, you can use the [Azure Machine Learning studio](https://ml.azure.com) UI deployment.
+
+## Deploy using the MLflow plugin
+
+The MLflow plugin [azureml-mlflow](https://pypi.org/project/azureml-mlflow/) can deploy models to Azure ML for real-time serving, to either Azure Kubernetes Service (AKS), Azure Container Instances (ACI), or Managed Inference Service (MIR).
+
+> [!WARNING]
+> Deploying to Managed Inference Service - Batch endpoints is not supported in the MLflow plugin at the moment.
+
+### Prerequisites
+
+* Install the `azureml-mlflow` package.
+* If you are running outside an Azure ML compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. See [MLflow Tracking URI to connect with Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow) for more details.
+
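A minimal sketch of that configuration, assuming the v1 SDK and a workspace `config.json` are available (whether the registry URI matches the tracking URI is an assumption of this sketch):

```python
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()
azureml_uri = ws.get_mlflow_tracking_uri()

# Point both tracking and the model registry at the workspace so the plugin
# can resolve "models:/<name>/<version>" URIs during deployment.
mlflow.set_tracking_uri(azureml_uri)
mlflow.set_registry_uri(azureml_uri)
```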
+### Deploying models to ACI or AKS
+
+Deployments can be generated using either the Python API for MLflow or the MLflow CLI. In both cases, you can provide a JSON configuration file with the details of the deployment you want to achieve. If you don't provide one, a default deployment is done using Azure Container Instances (ACI) and a minimal configuration. The full specification of this configuration file for ACI and AKS can be found in the [Deployment configuration schema](v1/reference-azure-machine-learning-cli.md#deployment-configuration-schema).
+
+#### Configuration example for ACI deployment
+
+```json
+{
+ "computeType": "aci",
+ "containerResourceRequirements":
+ {
+ "cpu": 1,
+ "memoryInGB": 1
+ },
+ "location": "eastus2",
+}
+```
+
+> [!NOTE]
+> - If `containerResourceRequirements` is not indicated, a deployment with minimal compute configuration is applied (cpu: 0.1 and memory: 0.5).
+> - If `location` is not indicated, it defaults to the location of the workspace.
+
+#### Configuration example for an AKS deployment
+
+```json
+{
+ "computeType": "aks",
+ "computeTargetName": "aks-mlflow"
+}
+```
+
+> [!NOTE]
+> - In the above example, `aks-mlflow` is the name of an Azure Kubernetes cluster registered/created in Azure Machine Learning.
+
+#### Running the deployment
+
+The following sample creates a deployment using an ACI:
+
+ ```python
+ import json
+ from mlflow.deployments import get_deploy_client
+
+ # Create the deployment configuration.
+ # If no deployment configuration is provided, then the deployment happens on ACI.
+ deploy_config = {"computeType": "aci"}
+
+ # Write the deployment configuration into a file.
+ deployment_config_path = "deployment_config.json"
+ with open(deployment_config_path, "w") as outfile:
+ outfile.write(json.dumps(deploy_config))
+
+ # Set the tracking uri in the deployment client.
+ client = get_deploy_client("<azureml-mlflow-tracking-url>")
+
+ # MLflow requires the deployment configuration to be passed as a dictionary.
+ config = {"deploy-config-file": deployment_config_path}
+ model_name = "mymodel"
+ model_version = 1
+
+ # define the model path and the name is the service name
+ # if model is not registered, it gets registered automatically and a name is autogenerated using the "name" parameter below
+ client.create_deployment(
+ model_uri=f"models:/{model_name}/{model_version}",
+ config=config,
+ name="mymodel-aci-deployment",
+ )
+ ```
+
+### Deploying models to Managed Inference
+
+Deployments can be generated using either the Python API for MLflow or the MLflow CLI. In both cases, you need to provide a JSON configuration file with the details of the deployment you want to achieve. The full specification of this configuration can be found at [Managed online deployment schema (v2)](reference-yaml-deployment-managed-online.md).
+
+#### Configuration example for a Managed Inference Service deployment (real time)
+
+```json
+{
+ "instance_type": "Standard_DS2_v2",
+ "instance_count": 1,
+}
+```
+
+#### Running the deployment
+
+The following sample deploys a model to a real time Managed Inference Endpoint:
+
+ ```python
+ import json
+ from mlflow.deployments import get_deploy_client
+
+ # Create the deployment configuration.
+ deploy_config = {
+ "instance_type": "Standard_DS2_v2",
+ "instance_count": 1,
+ }
+
+ # Write the deployment configuration into a file.
+ deployment_config_path = "deployment_config.json"
+ with open(deployment_config_path, "w") as outfile:
+ outfile.write(json.dumps(deploy_config))
+
+ # Set the tracking uri in the deployment client.
+ client = get_deploy_client("<azureml-mlflow-tracking-url>")
+
+ # MLflow requires the deployment configuration to be passed as a dictionary.
+ config = {"deploy-config-file": deployment_config_path}
+ model_name = "mymodel"
+ model_version = 1
+
+ # define the model path and the name is the service name
+ # if model is not registered, it gets registered automatically and a name is autogenerated using the "name" parameter below
+ client.create_deployment(
+ model_uri=f"models:/{model_name}/{model_version}",
+ config=config,
+ name="mymodel-mir-deployment",
+ )
+ ```
+
+## Deploy using CLI (v2)
+
+You can use the Azure ML CLI v2 to deploy models trained and logged with MLflow to Managed Inference. When you deploy your MLflow model using the Azure ML CLI v2, it's a no-code deployment, so you don't have to provide a scoring script or an environment, but you can if needed.
+
+### Prerequisites
++
+* You must have an MLflow model. The examples in this article are based on the models from [https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/).
+
+ * If you don't have an MLflow formatted model, you can [convert your custom ML model to MLflow format](how-to-convert-custom-model-to-mlflow.md).
++
+In the code snippets used in this article, the `ENDPOINT_NAME` environment variable contains the name of the endpoint to create and use. To set it, use the following command from the CLI. Replace `<YOUR_ENDPOINT_NAME>` with the name of your endpoint:
++
+### Steps
++
+This example shows how you can deploy an MLflow model to an online endpoint using CLI (v2).
+
+> [!IMPORTANT]
+> For MLflow no-code-deployment, **[testing via local endpoints](how-to-deploy-managed-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints)** is currently not supported.
+
+1. Create a YAML configuration file for your endpoint. The following example configures the name and authentication mode of the endpoint:
+
+ __create-endpoint.yaml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+
+1. To create a new endpoint using the YAML configuration, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
+
+1. Create a YAML configuration file for the deployment. The following example configures a deployment of the `sklearn-diabetes` model to the endpoint created in the previous step:
+
+ > [!IMPORTANT]
   > For MLflow no-code-deployment (NCD) to work, setting **`type`** to **`mlflow_model`** is required, `type: mlflow_model`. For more information, see [CLI (v2) model YAML schema](reference-yaml-model.md).
+
+ __sklearn-deployment.yaml__
+
+ :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
+
+1. To create the deployment using the YAML configuration, use the following command:
+
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
+
+## Deploy using Azure Machine Learning studio
+
+This example shows how you can deploy an MLflow model to an online endpoint using [Azure Machine Learning studio](https://ml.azure.com).
+
+1. From [studio](https://ml.azure.com), select your workspace and then use the __models__ page to create a new model in the registry. You can use the option *From local files* to select the MLflow model from [https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/mlflow/sklearn-diabetes/model](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/mlflow/sklearn-diabetes/model):
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/register-model-ui.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/register-model-ui.png" alt-text="Screenshot showing create option on the Models UI page.":::
+
+2. From [studio](https://ml.azure.com), select your workspace and then use either the __endpoints__ or __models__ page to create the endpoint deployment:
+
+ # [Endpoints page](#tab/endpoint)
+
+ 1. From the __Endpoints__ page, Select **+Create**.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/create-from-endpoints.png" alt-text="Screenshot showing create option on the Endpoints UI page.":::
+
+ 1. Provide a name and authentication type for the endpoint, and then select __Next__.
+ 1. When selecting a model, select the MLflow model registered previously. Select __Next__ to continue.
+
+ 1. When you select a model registered in MLflow format, in the Environment step of the wizard, you don't need a scoring script or an environment.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/ncd-wizard.png" alt-text="Screenshot showing no code and environment needed for MLflow models.":::
+
+ 1. Complete the wizard to deploy the model to the endpoint.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen.":::
+
+ # [Models page](#tab/models)
+
+ 1. Select the MLflow model, and then select __Deploy__. When prompted, select __Deploy to real-time endpoint__.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/deploy-from-models-ui.png" alt-text="Screenshot showing how to deploy model from Models UI.":::
+
+ 1. Complete the wizard to deploy the model to the endpoint.
+
+
+
+### Deploy models after a training job
+
+This section helps you understand how to deploy models to an online endpoint once you have completed your [training job](how-to-train-cli.md).
+
+1. Download the outputs from the training job. The outputs contain the model folder.
+
+ > [!NOTE]
+ > If you have used `mlflow.autolog()` in your training script, you will see model artifacts in the job's run history. Azure Machine Learning integrates with MLflow's tracking functionality. You can use `mlflow.autolog()` for several common ML frameworks to log model parameters, performance metrics, model artifacts, and even feature importance graphs.
+ >
+ > For more information, see [Train models with CLI](how-to-train-cli.md#model-tracking-with-mlflow). Also see the [training job samples](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step) in the GitHub repository.
+
+ # [Azure Machine Learning studio](#tab/studio)
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/download-output-logs.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/download-output-logs.png" alt-text="Screenshot showing how to download Outputs and logs from Experimentation run.":::
+
+ # [CLI](#tab/cli)
+
+ ```azurecli
+ az ml job download -n $run_id --outputs
+ ```
+
+2. To deploy using the downloaded files, you can use either studio or the Azure command-line interface. Use the model folder from the outputs for deployment:
+
+ * [Deploy using Azure Machine Learning studio](how-to-deploy-mlflow-models-online-endpoints.md#deploy-using-azure-machine-learning-studio).
+ * [Deploy using Azure Machine Learning CLI (v2)](how-to-deploy-mlflow-models-online-endpoints.md#deploy-using-cli-v2).
+
+## Considerations when deploying to real time inference
+
+The following input types are supported in Azure ML when deploying models with no-code deployment. Take a look at the *Notes* at the bottom of the table for additional considerations.
+
+| Input type | Support in MLflow models (serve) | Support in Azure ML|
+| :- | :-: | :-: |
+| JSON-serialized pandas DataFrames in the split orientation | **&check;** | **&check;** |
+| JSON-serialized pandas DataFrames in the records orientation | **&check;** | <sup>1</sup> |
+| CSV-serialized pandas DataFrames | **&check;** | <sup>2</sup> |
+| Tensor input format as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | | **&check;** |
+| Tensor input formatted as in TF ServingΓÇÖs API | **&check;** | |
+
+> [!NOTE]
+> - <sup>1</sup> We suggest you use split orientation instead. Records orientation doesn't guarantee that column ordering is preserved.
+> - <sup>2</sup> We suggest you explore batch inference for processing files.
+
+Regardless of the input type used, Azure Machine Learning requires inputs to be provided in a JSON payload, within a dictionary key `input_data`. Note that this key is not required when serving models using the command `mlflow models serve`, and hence the payloads can't be used interchangeably.
+
+### Creating requests
+
+Your inputs should be submitted inside a JSON payload containing a dictionary with key `input_data`.
+
+#### Payload example for a JSON-serialized pandas DataFrames in the split orientation
+
+```json
+{
+ "input_data": {
+ "columns": [
+ "age", "sex", "trestbps", "chol", "fbs", "restecg", "thalach", "exang", "oldpeak", "slope", "ca", "thal"
+ ],
+ "index": [1],
+ "data": [
+ [1, 1, 145, 233, 1, 2, 150, 0, 2.3, 3, 0, 2]
+ ]
+ }
+}
+```
+
+#### Payload example for a tensor input
+
+```json
+{
+ "input_data": [
+ [1, 1, 0, 233, 1, 2, 150, 0, 2.3, 3, 0, 2],
+        [1, 1, 0, 233, 1, 2, 150, 0, 2.3, 3, 0, 2],
+ [1, 1, 0, 233, 1, 2, 150, 0, 2.3, 3, 0, 2],
+ [1, 1, 145, 233, 1, 2, 150, 0, 2.3, 3, 0, 2]
+ ]
+}
+```
+
+#### Payload example for a named-tensor input
+
+```json
+{
+ "input_data": {
+ "tokens": [
+ [0, 655, 85, 5, 23, 84, 23, 52, 856, 5, 23, 1]
+ ],
+ "mask": [
+ [0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
+ ]
+ }
+}
+```
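+
+As a hedged illustration, the following Python snippet posts an `input_data` payload like the ones above to a managed online endpoint. The scoring URI, key, and column names are placeholders you should replace with your own values:
+
+```python
+import requests
+
+# Placeholders: replace with your endpoint's scoring URI and key.
+scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
+api_key = "<endpoint-key>"
+
+# A minimal payload in the split orientation; column names are illustrative.
+payload = {
+    "input_data": {
+        "columns": ["age", "sex", "trestbps"],
+        "index": [1],
+        "data": [[63, 1, 145]],
+    }
+}
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {api_key}",
+}
+
+response = requests.post(scoring_uri, json=payload, headers=headers)
+print(response.json())
+```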
+
+## Considerations when deploying to batch inference
+
+Azure ML supports no-code deployment for batch inference in the Managed Inference service. This is a convenient way to deploy models that need to process large amounts of data in a batch fashion.
+
+### How work is distributed across workers
+
+Work is distributed at the file level, for both structured and unstructured data. As a consequence, only [file datasets](v1/how-to-create-register-datasets.md#filedataset) or [URI folders](reference-yaml-data.md) are supported for this feature. Each worker processes batches of `Mini batch size` files at a time. Further parallelism can be achieved if `Max concurrency per instance` is increased.
+
+> [!WARNING]
+> Nested folder structures are not explored during inference. If you are partitioning your data using folders, make sure to flatten the structure beforehand.
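+
+As a rough sketch only, the `Mini batch size` and `Max concurrency per instance` settings described above map to the following parameters when creating a batch deployment with the Azure ML Python SDK v2 (`azure-ai-ml`). The endpoint, model, compute, and client names are assumptions for illustration:
+
+```python
+from azure.ai.ml.entities import BatchDeployment
+
+# The endpoint, model, and compute names below are placeholders.
+deployment = BatchDeployment(
+    name="mlflow-batch-deployment",
+    endpoint_name="my-batch-endpoint",
+    model="azureml:my-mlflow-model:1",
+    compute="cpu-cluster",
+    instance_count=2,
+    mini_batch_size=10,              # number of files each worker processes per batch
+    max_concurrency_per_instance=2,  # parallel workers per compute instance
+)
+
+# `ml_client` is assumed to be an authenticated azure.ai.ml.MLClient.
+ml_client.batch_deployments.begin_create_or_update(deployment)
+```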
+
+### Supported file types
+
+The following data types are supported for batch inference.
+
+| File extension | Type returned as model's input | Signature requirement |
+| :- | :- | :- |
+| `.csv` | `pd.DataFrame` | `ColSpec`. If not provided, column typing is not enforced. |
+| `.png`, `.jpg`, `.jpeg`, `.tiff`, `.bmp`, `.gif` | `np.ndarray` | `TensorSpec`. Input is reshaped to match the tensor's shape if available. If no signature is available, tensors of type `np.uint8` are inferred. |
++
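+The signature requirements above are easiest to satisfy when logging the model. The following hedged sketch, assuming a scikit-learn model trained on a small pandas DataFrame (the data and column names are placeholders), attaches a `ColSpec`-based signature inferred from the training data so that column typing is enforced for `.csv` inputs:
+
+```python
+import mlflow
+import pandas as pd
+from mlflow.models.signature import infer_signature
+from sklearn.linear_model import LogisticRegression
+
+# Placeholder training data; replace with your own dataset.
+train_df = pd.DataFrame({"age": [63, 37], "chol": [233, 250]})
+labels = [1, 0]
+
+model = LogisticRegression().fit(train_df, labels)
+
+# infer_signature builds a ColSpec schema from the DataFrame's columns and dtypes.
+signature = infer_signature(train_df, model.predict(train_df))
+mlflow.sklearn.log_model(model, artifact_path="model", signature=signature)
+```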
+## Next steps
+
+To learn more, review these articles:
+
+- [Deploy models with REST (preview)](how-to-deploy-with-rest.md)
+- [Create and use online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md)
+- [Safe rollout for online endpoints](how-to-safely-rollout-managed-endpoints.md)
+- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)
+- [Use batch endpoints for batch scoring](how-to-use-batch-endpoint.md)
+- [View costs for an Azure Machine Learning managed online endpoint (preview)](how-to-view-online-endpoints-costs.md)
+- [Access Azure resources with an online endpoint and managed identity (preview)](how-to-access-resources-from-endpoints-managed-identities.md)
+- [Troubleshoot online endpoint deployment](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
Last updated 03/11/2021
+ # Deploy a model for use with Cognitive Search [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
When deploying a model for use with Azure Cognitive Search, the deployment must
## Connect to your workspace
-An Azure Machine Learning workspace provides a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training runs, including logs, metrics, output, and a snapshot of your scripts.
+An Azure Machine Learning workspace provides a centralized place to work with all the artifacts you create when you use Azure Machine Learning. The workspace keeps a history of all training jobs, including logs, metrics, output, and a snapshot of your scripts.
To connect to an existing workspace, use the following code:
machine-learning How To Designer Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-transform-data.md
Now that your pipeline is set up to split the data, you need to specify where to
![Screenshot showing how to configure the Export Data components](media/how-to-designer-transform-data/us-income-export-data.png).
-### Submit the run
+### Submit the job
-Now that your pipeline is setup to split and export the data, submit a pipeline run.
+Now that your pipeline is set up to split and export the data, submit a pipeline job.
1. At the top of the canvas, select **Submit**.
-1. In the **Set up pipeline run** dialog, select **Create new** to create an experiment.
+1. In the **Set up pipeline job** dialog, select **Create new** to create an experiment.
- Experiments logically group together related pipeline runs. If you run this pipeline in the future, you should use the same experiment for logging and tracking purposes.
+ Experiments logically group together related pipeline jobs. If you run this pipeline in the future, you should use the same experiment for logging and tracking purposes.
1. Provide a descriptive experiment name like "split-census-data".
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
Last updated 10/21/2021
++ # Export or delete your Machine Learning service workspace data In Azure Machine Learning, you can export or delete your workspace data using either the portal's graphical interface or the Python SDK. This article describes both options.
In Azure Machine Learning, you can export or delete your workspace data using ei
In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete using Azure Machine Learning studio, CLI, and SDK. Telemetry data can be accessed through the Azure Privacy portal.
-In Azure Machine Learning, personal data consists of user information in run history documents.
+In Azure Machine Learning, personal data consists of user information in job history documents.
## Delete high-level resources using the portal
These resources can be deleted by selecting them from the list and choosing **De
:::image type="content" source="media/how-to-export-delete-data/delete-resource-group-resources.png" alt-text="Screenshot of portal, with delete icon highlighted":::
-Run history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
+Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
:::image type="content" source="media/how-to-export-delete-data/storage-account-folders.png" alt-text="Screenshot of azureml directory in storage account, within the portal":::
Run history documents, which may contain personal user information, are stored i
Azure Machine Learning studio provides a unified view of your machine learning resources, such as notebooks, datasets, models, and experiments. Azure Machine Learning studio emphasizes preserving a record of your data and experiments. Computational resources such as pipelines and compute resources can be deleted using the browser. For these resources, navigate to the resource in question and choose **Delete**.
-Datasets can be unregistered and Experiments can be archived, but these operations don't delete the data. To entirely remove the data, datasets and experiment data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Run can be deleted directly in studio. Deleting a Run deletes the Run's data.
+Datasets can be unregistered and Experiments can be archived, but these operations don't delete the data. To entirely remove the data, datasets and experiment data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Job can be deleted directly in studio. Deleting a Job deletes the Job's data.
> [!NOTE] > Prior to unregistering a Dataset, use its **Data source** link to find the specific Data URL to delete.
-You can download training artifacts from experimental runs using the Studio. Choose the **Experiment** and **Run** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download**.
+You can download training artifacts from experimental jobs using the Studio. Choose the **Experiment** and **Job** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download**.
You can download a registered model by navigating to the **Model** and choosing **Download**.
You can download a registered model by navigating to the **Model** and choosing
## Export and delete resources using the Python SDK
-You can download the outputs of a particular run using:
+You can download the outputs of a particular job using:
```python # Retrieved from Azure Machine Learning web UI
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
The file has four sections:
||| |**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. | |**Connect** | 1. Connect to the machine learning workspace. <br /> 2. Connect to a compute target. |
-|**Run** | 1. Submit a training run. |
+|**Job** | 1. Submit a training job. |
|**Deploy** | 1. Register model in Azure Machine Learning registry. 1. Deploy the model. | ## Create repository
Use the [Azure Machine Learning Compute action](https://github.com/Azure/aml-com
with: azure_credentials: ${{ secrets.AZURE_CREDENTIALS }} ```
-## Submit training run
+## Submit training job
Use the [Azure Machine Learning Training action](https://github.com/Azure/aml-run) to submit a ScriptRun, an Estimator or a Pipeline to Azure Machine Learning.
machine-learning How To High Availability Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-high-availability-machine-learning.md
By keeping your data storage isolated from the default storage the workspace use
### Manage machine learning artifacts as code
-Runs in Azure Machine Learning are defined by a run specification. This specification includes dependencies on input artifacts that are managed on a workspace-instance level, including environments, datasets, and compute. For multi-region run submission and deployments, we recommend the following practices:
+Jobs in Azure Machine Learning are defined by a job specification. This specification includes dependencies on input artifacts that are managed on a workspace-instance level, including environments, datasets, and compute. For multi-region job submission and deployments, we recommend the following practices:
* Manage your code base locally, backed by a Git repository. * Export important notebooks from Azure Machine Learning studio.
Runs in Azure Machine Learning are defined by a run specification. This specific
* Manage configurations as code. * Avoid hardcoded references to the workspace. Instead, configure a reference to the workspace instance using a [config file](how-to-configure-environment.md#workspace) and use [Workspace.from_config()](/python/api/azureml-core/azureml.core.workspace.workspace#remarks) to initialize the workspace. To automate the process, use the [Azure CLI extension for machine learning](v1/reference-azure-machine-learning-cli.md) command [az ml folder attach](/cli/azure/ml(v1)/folder#az-ml(v1)-folder-attach).
- * Use run submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)).
+ * Use job submission helpers such as [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) and [Pipeline](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipeline(class)), as shown in the sketch after this list.
* Use [Environments.save_to_directory()](/python/api/azureml-core/azureml.core.environment(class)#save-to-directory-path--overwrite-false-) to save your environment definitions. * Use a Dockerfile if you use custom Docker images. * Use the [Dataset](/python/api/azureml-core/azureml.core.dataset(class)) class to define the collection of data [paths](/python/api/azureml-core/azureml.data.datapath) used by your solution.
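+As a minimal sketch, assuming the SDK v1, a workspace config file, a `train.py` script, and a compute cluster named `cpu-cluster` (all placeholders), a job can be submitted without hardcoding workspace references:
+
+```python
+from azureml.core import Experiment, ScriptRunConfig, Workspace
+
+# Load the workspace from a config file instead of hardcoding references.
+ws = Workspace.from_config()
+
+# The script, source directory, and compute target names are assumptions.
+src = ScriptRunConfig(
+    source_directory=".",
+    script="train.py",
+    compute_target="cpu-cluster",
+)
+
+run = Experiment(ws, "my-experiment").submit(src)
+```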
Runs in Azure Machine Learning are defined by a run specification. This specific
### Continue work in the failover workspace
-When your primary workspace becomes unavailable, you can switch over the secondary workspace to continue experimentation and development. Azure Machine Learning does not automatically submit runs to the secondary workspace if there is an outage. Update your code configuration to point to the new workspace resource. We recommend to avoiding hardcoding workspace references. Instead, use a [workspace config file](how-to-configure-environment.md#workspace) to minimize manual user steps when changing workspaces. Make sure to also update any automation, such as continuous integration and deployment pipelines to the new workspace.
+When your primary workspace becomes unavailable, you can switch over to the secondary workspace to continue experimentation and development. Azure Machine Learning does not automatically submit jobs to the secondary workspace if there is an outage. Update your code configuration to point to the new workspace resource. We recommend avoiding hardcoded workspace references. Instead, use a [workspace config file](how-to-configure-environment.md#workspace) to minimize manual user steps when changing workspaces. Make sure to also update any automation, such as continuous integration and deployment pipelines, to the new workspace.
-Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Dependent on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue run submission. In case you have configured your primary workspace and secondary workspace resources to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace. For example, if both workspaces share the same docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
+Azure Machine Learning cannot sync or recover artifacts or metadata between workspace instances. Depending on your application deployment strategy, you might have to move artifacts or recreate experimentation inputs such as dataset objects in the failover workspace in order to continue job submission. In case you have configured your primary workspace and secondary workspace resources to share associated resources with geo-replication enabled, some objects might be directly available to the failover workspace. For example, this is the case if both workspaces share the same Docker images, configured datastores, and Azure Key Vault resources. The following diagram shows a configuration where two workspaces share the same images (1), datastores (2), and Key Vault (3).
![Reference resource configuration](./media/how-to-high-availability-machine-learning/bcdr-resource-configuration.png)
The following artifacts can be exported and imported between workspaces by using
> [!TIP] > * __Registered datasets__ cannot be downloaded or moved. This includes datasets generated by Azure ML, such as intermediate pipeline datasets. However datasets that refer to a shared file location that both workspaces can access, or where the underlying data storage is replicated, can be registered on both workspaces. Use the [az ml dataset register](/cli/azure/ml(v1)/dataset#ml-az-ml-dataset-register) to register a dataset.
-> * __Run outputs__ are stored in the default storage account associated with a workspace. While run outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
+> * __Job outputs__ are stored in the default storage account associated with a workspace. While job outputs might become inaccessible from the studio UI in the case of a service outage, you can directly access the data through the storage account. For more information on working with data stored in blobs, see [Create, download, and list blobs with Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md).
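+
+As a hedged sketch, assuming the `azure-storage-blob` Python package and placeholder values for the storage connection string and container name, job outputs under the `azureml/` path can be listed and downloaded directly from blob storage:
+
+```python
+from azure.storage.blob import ContainerClient
+
+# Placeholders: the workspace's default storage connection string and blob container name.
+container = ContainerClient.from_connection_string(
+    conn_str="<default-storage-connection-string>",
+    container_name="<default-blob-container>",
+)
+
+# Job history documents and outputs live under subfolders of /azureml.
+for blob in container.list_blobs(name_starts_with="azureml/"):
+    download_path = blob.name.replace("/", "_")
+    with open(download_path, "wb") as f:
+        f.write(container.download_blob(blob.name).readall())
+```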
## Recovery options
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-pipelines-application-insights.md
Last updated 10/21/2021
+ # Collect machine learning pipeline log files in Application Insights for alerts and debugging [!INCLUDE [sdk v1](../../includes/machine-learning-sdk-v1.md)]
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
Logs can help you diagnose errors and warnings, or track performance metrics lik
> [!TIP]
-> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
+> This article shows you how to monitor the model training process. If you're interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training jobs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
## Prerequisites
The following table describes how to log specific value types:
|Log numpy metrics or PIL image objects|`mlflow.log_image(img, 'figure.png')`|| |Log matlotlib plot or image file|` mlflow.log_figure(fig, "figure.png")`||
-## Log a training run with MLflow
+## Log a training job with MLflow
To set up for logging with MLflow, import `mlflow` and set the tracking URI:
ws = Workspace.from_config()
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri()) ```
-### Interactive runs
+### Interactive jobs
When training interactively, such as in a Jupyter Notebook, use the following pattern: 1. Create or set the active experiment.
-1. Start the run.
+1. Start the job.
1. Use logging methods to log metrics and other information.
-1. End the run.
+1. End the job.
-For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a run
+For example, the following code snippet demonstrates setting the tracking URI, creating an experiment, and then logging during a job:
```python from mlflow.tracking import MlflowClient
For remote training runs, the tracking URI and experiment are set automatically.
To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
-## View run information
+## View job information
You can view the logged information using MLflow through the [MLflow.entities.Run](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run) object. After a training job completes, you can retrieve it using the [MlFlowClient()](https://mlflow.org/docs/latest/python_api/mlflow.tracking.html#mlflow.tracking.MlflowClient):
params = finished_mlflow_run.data.params
<a name="view-the-experiment-in-the-web-portal"></a>
-## View run metrics in the studio UI
+## View job metrics in the studio UI
-You can browse completed run records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
+You can browse completed job records, including logged metrics, in the [Azure Machine Learning studio](https://ml.azure.com).
-Navigate to the **Experiments** tab. To view all your runs in your Workspace across Experiments, select the **All runs** tab. You can drill down on runs for specific Experiments by applying the Experiment filter in the top menu bar.
+Navigate to the **Jobs** tab. To view all your jobs in your Workspace across Experiments, select the **All jobs** tab. You can drill down on jobs for specific Experiments by applying the Experiment filter in the top menu bar.
For the individual Experiment view, select the **All experiments** tab. On the experiment run dashboard, you can see tracked metrics and logs for each run.
-You can also edit the run list table to select multiple runs and display either the last, minimum, or maximum logged value for your runs. Customize your charts to compare the logged metrics values and aggregates across multiple runs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
+You can also edit the job list table to select multiple jobs and display either the last, minimum, or maximum logged value for your jobs. Customize your charts to compare the logged metrics values and aggregates across multiple jobs. You can plot multiple metrics on the y-axis of your chart and customize your x-axis to plot your logged metrics.
-### View and download log files for a run
+### View and download log files for a job
Log files are an essential resource for debugging the Azure ML workloads. After submitting a training job, drill down to a specific run to view its logs and outputs:
-1. Navigate to the **Experiments** tab.
+1. Navigate to the **Jobs** tab.
1. Select the runID for a specific run. 1. Select **Outputs and logs** at the top of the page. 2. Select **Download all** to download all your logs into a zip folder.
machine-learning How To Machine Learning Fairness Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-fairness-aml.md
The following example shows how to use the fairness package. We will upload mode
If you complete the previous steps (uploading generated fairness insights to Azure Machine Learning), you can view the fairness dashboard in [Azure Machine Learning studio](https://ml.azure.com). This dashboard is the same visualization dashboard provided in Fairlearn, enabling you to analyze the disparities among your sensitive feature's subgroups (e.g., male vs. female). Follow one of these paths to access the visualization dashboard in Azure Machine Learning studio:
- * **Experiments pane (Preview)**
- 1. Select **Experiments** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
+ * **Jobs pane (Preview)**
+ 1. Select **Jobs** in the left pane to see a list of experiments that you've run on Azure Machine Learning.
1. Select a particular experiment to view all the runs in that experiment. 1. Select a run, and then the **Fairness** tab to the explanation visualization dashboard. 1. Once landing on the **Fairness** tab, click on a **fairness id** from the menu on the right.
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
For a high-level overview of how environments work in Azure Machine Learning, se
## Browse curated environments
-Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images which reduces the run preparation cost and support training and inferencing scenarios.
+Curated environments contain collections of Python packages and are available in your workspace by default. These environments are backed by cached Docker images which reduces the job preparation cost and support training and inferencing scenarios.
Click on an environment to see detailed information about its contents. For more information, see [Azure Machine Learning curated environments](resource-curated-environments.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
The **Edit and submit** button opens the **Create a new Automated ML run** wizar
Once you have the best model at hand, it is time to deploy it as a web service to predict on new data. >[!TIP]
-> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](how-to-deploy-and-where.md?tabs=python#register-a-model-from-an-azure-ml-training-run-1) to the workspace.
+> If you are looking to deploy a model that was generated via the `automl` package with the Python SDK, you must [register your model](how-to-deploy-and-where.md?tabs=python#register-a-model-from-an-azure-ml-training-job-1) to the workspace.
> > Once your model is registered, find it in the studio by selecting **Models** on the left pane. Once you open your model, you can select the **Deploy** button at the top of the screen, and then follow the instructions as described in **step 2** of the **Deploy your model** section.
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
After you create your Azure Databricks workspace and cluster,
1. Connect your Azure Databricks workspace and Azure Machine Learning workspace.
-Additional detail for these steps are in the following sections so you can successfully run your MLflow experiments with Azure Databricks.
+Additional details for these steps are in the following sections so you can successfully run your MLflow experiments with Azure Databricks.
## Install libraries
To link your ADB workspace to a new or existing Azure Machine Learning workspace
![Link Azure DB and Azure Machine Learning workspaces](./media/how-to-use-mlflow-azure-databricks/link-workspaces.png)
+> [!NOTE]
+> MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
## MLflow Tracking in your workspaces
-After you instantiate your workspace, MLflow Tracking is automatically set to be tracked in all of the following places:
+After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow Tracking is automatically configured to track experiments in all of the following places:
* The linked Azure Machine Learning workspace. * Your original ADB workspace.
-All your experiments land in the managed Azure Machine Learning tracking service.
-
-The following code should be in your experiment notebook to get your linked Azure Machine Learning workspace.
-
-This code,
-
-* Gets the details of your Azure subscription to instantiate your Azure Machine Learning workspace.
-
-* Assumes you have an existing resource group and Azure Machine Learning workspace, otherwise you can [create them](how-to-manage-workspace.md).
-
-* Sets the experiment name. The `user_name` here is consistent with the `user_name` associated with the Azure Databricks workspace.
+You can then use MLflow in Azure Databricks in the same way you're used to. The following example sets the experiment name as it's usually done in Azure Databricks:
```python
-import mlflow
-import mlflow.azureml
-import azureml.mlflow
-import azureml.core
-
-from azureml.core import Workspace
-
-subscription_id = 'subscription_id'
-
-# Azure Machine Learning resource group NOT the managed resource group
-resource_group = 'resource_group_name'
-
-#Azure Machine Learning workspace name, NOT Azure Databricks workspace
-workspace_name = 'workspace_name'
-
-# Instantiate Azure Machine Learning workspace
-ws = Workspace.get(name=workspace_name,
- subscription_id=subscription_id,
- resource_group=resource_group)
+import mlflow
#Set MLflow experiment.
experimentName = "/Users/{user_name}/{experiment_folder}/{experiment_name}"
mlflow.set_experiment(experimentName)
+```
+In your training script, import `mlflow` to use the MLflow logging APIs, and start logging your run metrics. The following example logs the epoch loss metric.
+```python
+import mlflow
+mlflow.log_metric('epoch_loss', loss.item())
```
-> [!NOTE]
-> MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
+> [!NOTE]
+> Unlike tracking, model registries don't support registering models at the same time on both Azure Machine Learning and Azure Databricks; one or the other has to be used. For more details, read the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow).
### Set MLflow Tracking to only track in your Azure Machine Learning workspace
-If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace.
-
-Include the following code in your script:
+If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace. This configuration has the advantage of enabling an easier path to deployment using Azure Machine Learning deployment options.
+
+You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as demonstrated in the following example:
+
+ # [Using the Azure ML SDK v2](#tab/sdkv2)
+
+ You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using:
+
+ ```python
+    import mlflow
+    from azure.ai.ml import MLClient
+ from azure.identity import DeviceCodeCredential
+
+ subscription_id = ""
+ aml_resource_group = ""
+ aml_workspace_name = ""
+
+ ml_client = MLClient(credential=DeviceCodeCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=aml_resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(aml_workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ # [Building the MLflow tracking URI](#tab/custom)
+
+    The Azure Machine Learning tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
+
+ ```python
+ import mlflow
+
+ aml_region = ""
+ subscription_id = ""
+ aml_resource_group = ""
+ aml_workspace_name = ""
+
+ azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{aml_resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{aml_workspace_name}"
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!NOTE]
+    > You can also get this URI by navigating to the [Azure ML studio web portal](https://ml.azure.com) -> Click on the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
+
+
+
+#### Experiment names in Azure Machine Learning
+
+When MLflow is configured to exclusively track experiments in the Azure Machine Learning workspace, the experiment naming convention has to follow the one used by Azure Machine Learning. In Azure Databricks, experiments are named with the path to where the experiment is saved, like `/Users/alice@contoso.com/iris-classifier`. However, in Azure Machine Learning, you provide the experiment name directly. The same experiment from the previous example would be named `iris-classifier`:
```python
-uri = ws.get_mlflow_tracking_uri()
-mlflow.set_tracking_uri(uri)
+mlflow.set_experiment(experiment_name="iris-classifier")
```
-In your training script, import `mlflow` to use the MLflow logging APIs, and start logging your run metrics. The following example, logs the epoch loss metric.
+## Logging models with MLflow
+
+After your model is trained, you can log it to the tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>` refers to the framework associated with the model. [Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api). In the following example, a model created with the Spark library MLlib is logged. It's worth mentioning that the flavor `spark` doesn't refer to the fact that the model is trained on a Spark cluster, but to the training framework that was used (you could train a model with TensorFlow on Spark, and the flavor to use would then be `tensorflow`).
```python
-import mlflow
-mlflow.log_metric('epoch_loss', loss.item())
+mlflow.spark.log_model(model, artifact_path = "model")
```
-## Register models with MLflow
+Models are logged inside the run being tracked. That means models are available either in both Azure Databricks and Azure Machine Learning (the default) or exclusively in Azure Machine Learning, if you configured the tracking URI to point to it.
-After your model is trained, you can log and register your models to the backend tracking server with the `mlflow.<model_flavor>.log_model()` method. `<model_flavor>`, refers to the framework associated with the model. [Learn what model flavors are supported](https://mlflow.org/docs/latest/models.html#model-api).
+> [!IMPORTANT]
+> Notice that here the parameter `registered_model_name` has not been specified. Read the section [Registering models in the registry with MLflow](#registering-models-in-the-registry-with-mlflow) for more details about the implications of this parameter and how the registry works.
-The backend tracking server is the Azure Databricks workspace by default; unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace), then the backend tracking server is the Azure Machine Learning workspace.
+## Registering models in the registry with MLflow
-* **If a registered model with the name doesnΓÇÖt exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
+Unlike tracking, **model registries can't operate** in Azure Databricks and Azure Machine Learning at the same time; one or the other has to be used. By default, the Azure Databricks workspace is used for model registries, unless you chose to [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace), in which case the model registry is the Azure Machine Learning workspace.
-* **If a registered model with the name already exists**, the method creates a new model version and returns the version object.
+Assuming you are using the default configuration, the following line logs a model inside the corresponding runs of both Azure Databricks and Azure Machine Learning, but registers it only on Azure Databricks:
```python mlflow.spark.log_model(model, artifact_path = "model", registered_model_name = 'model_name')
+```
+
+* **If a registered model with the name doesn't exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
-mlflow.sklearn.log_model(model, artifact_path = "model",
- registered_model_name = 'model_name')
+* **If a registered model with the name already exists**, the method creates a new model version and returns the version object.
+
+### Registering models in the Azure Machine Learning Registry with MLflow
+
+At some point, you may want to start registering models in Azure Machine Learning. This configuration automatically enables all the deployment capabilities of Azure Machine Learning, including no-code deployment and model management. In that case, we recommend that you [set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace), which removes the ambiguity of where models are being registered.
+
+If you want to continue using the dual-tracking capabilities but register models in Azure Machine Learning, you can instruct MLflow to use Azure ML for model registries by configuring the MLflow model registry URI. This URI has the exact same format and value as the MLflow tracking URI.
+
+```python
+mlflow.set_registry_uri(azureml_mlflow_uri)
```
-## Create endpoints for MLflow models
+> [!NOTE]
+> The value of `azureml_mlflow_uri` was obtained in the same way it was demonstrated in [Set MLflow Tracking to only track in your Azure Machine Learning workspace](#set-mlflow-tracking-to-only-track-in-your-azure-machine-learning-workspace)
+
+For a complete example of this scenario, see [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb).
+
+## Deploying and consuming models registered in Azure Machine Learning
-When you are ready to create an endpoint for your ML models. You can deploy as,
+Models registered in Azure Machine Learning using MLflow can be consumed as:
-* An Azure Machine Learning Request-Response web service for interactive scoring. This deployment allows you to leverage and apply the Azure Machine Learning model management, and data drift detection capabilities to your production models.
+* An Azure Machine Learning endpoint (real-time and batch): This deployment allows you to leverage Azure Machine Learning deployment capabilities for both real-time and batch inference in Azure Container Instances (ACI), Azure Kubernetes Service (AKS), or Managed Inference endpoints (MIR).
-* MLFlow model objects, which can be used in streaming or batch pipelines as Python functions or Pandas UDFs in Azure Databricks workspace.
+* MLFlow model objects or Pandas UDFs, which can be used in Azure Databricks notebooks in streaming or batch pipelines.
### Deploy models to Azure Machine Learning endpoints
-You can leverage the [mlflow.azureml.deploy](https://www.mlflow.org/docs/latest/python_api/mlflow.azureml.html#mlflow.azureml.deploy) API to deploy a model to your Azure Machine Learning workspace. If you only registered the model to the Azure Databricks workspace, as described in the [register models with MLflow](#register-models-with-mlflow) section, specify the `model_name` parameter to register the model into Azure Machine Learning workspace.
+You can leverage the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. See the [How to deploy MLflow models](how-to-deploy-mlflow-models.md) page for complete details about how to deploy models to the different targets.
-Azure Databricks runs can be deployed to the following endpoints,
-* [Azure Container Instance](how-to-deploy-mlflow-models.md#deploy-to-azure-container-instance-aci)
-* [Azure Kubernetes Service](how-to-deploy-mlflow-models.md#deploy-to-azure-kubernetes-service-aks)
+> [!IMPORTANT]
+> Models need to be registered in the Azure Machine Learning registry in order to deploy them. If your models happen to be registered in the MLflow instance inside Azure Databricks, you'll have to register them again in Azure Machine Learning. If this is your case, check the example [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb)
-### Deploy models to ADB endpoints for batch scoring
+### Deploy models to ADB for batch scoring using UDFs
You can choose Azure Databricks clusters for batch scoring. The MLFlow model is loaded and used as a Spark Pandas UDF to score new data.
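+
+As a hedged illustration, assuming a Databricks notebook with an active `spark` session, a Spark DataFrame `df` of new data, and a model registered under the name `model_name` (all assumptions), the model can be wrapped as a Pandas UDF for batch scoring:
+
+```python
+import mlflow.pyfunc
+
+# Load the registered model as a Spark Pandas UDF; the model name and version are placeholders.
+score_udf = mlflow.pyfunc.spark_udf(spark, model_uri="models:/model_name/1")
+
+# Apply the UDF to the DataFrame's feature columns to produce predictions.
+scored_df = df.withColumn("prediction", score_udf(*df.columns))
+display(scored_df)
+```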
If you don't plan to use the logged metrics and artifacts in your workspace, the
## Example notebooks
-The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow) demonstrate and expand upon concepts presented in this article.
+The [Training models in Azure Databricks and deploying them on Azure ML](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb) notebook demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also covers how to handle cases where you want to track the experiments and models with the MLflow instance in Azure Databricks and leverage Azure ML for deployment.
## Next steps * [Deploy MLflow models as an Azure web service](how-to-deploy-mlflow-models.md).
machine-learning How To Use Pipeline Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-ui.md
If your pipeline fails or gets stuck on a node, first view the logs.
The **user_logs folder** contains information about user code generated logs. This folder is open by default, and the **std_log.txt** log is selected. The **std_log.txt** is where your code's logs (for example, print statements) show up.
- The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-run).
+ The **system_logs folder** contains logs generated by Azure Machine Learning. Learn more about [how to view and download log files for a run](how-to-log-view-metrics.md#view-and-download-log-files-for-a-job).
:::image type="content" source="./media/how-to-use-pipeline-ui/view-user-log.png" alt-text="Screenshot showing the user logs of a node." lightbox= "./media/how-to-use-pipeline-ui/view-user-log.png":::
mariadb Quickstart Create Mariadb Server Database Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-arm-template.md
read -p "Press [ENTER] to continue: "
For a step-by-step tutorial that guides you through the process of creating an ARM template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md)
marketplace Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics.md
description: Access analytic reports to monitor sales, evaluate performance, and
---++ Last updated 06/21/2022
marketplace Azure Vm Get Sas Uri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-get-sas-uri.md
description: Generate a shared access signature (SAS) URI for a virtual hard dis
++ Last updated 06/23/2021
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license.md
Previously updated : 05/25/2022 Last updated : 06/23/2022 # ISV app license management
The ISV creates a solution package for the offer that includes license plan info
| Transactable offers | Licensable-only offers | | | - |
-| Customers can manage subscriptions for the Apps they purchased in [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in Partner Center [deal registration portal](https://partner.microsoft.com/) |
+| Customers can manage subscriptions for the Apps they purchased in [Microsoft 365 admin center](https://admin.microsoft.com/), just like they normally do for any of their Microsoft Office or Dynamics subscriptions. | ISVs activate and manage deals in Partner Center [deal registration](/partner-center/register-deals) portal |
### Step 5: Assign licenses
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
The price change feature supports the following scenarios:
### Supported offer types
-The ability to change prices is available for both public and private plans of all offers transacted through Microsoft: Azure application (Managed App), Software as a service, and Virtual Machine.
+The ability to change prices is available for both public and private plans of offers transacted through Microsoft.
+
+Supported offer types:
+- Azure application (Managed App)
+- Software as a service (SaaS)
+- Azure virtual machine.
+
+Price changes for the following offer types are not yet supported:
+- Dynamics 365 apps on Dataverse and Power Apps
+- Power BI visual
+- Azure container
### Unsupported scenarios and limitations
mysql Concepts Networking Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-vnet.md
Here are some concepts to be familiar with when using virtual networks with MySQ
* If you use Azure API, an Azure Resource Manager template (ARM template), or Terraform, please create private DNS zones that end with `mysql.database.azure.com` and use them while configuring flexible servers with private access. For more information, see the [private DNS zone overview](../../dns/private-dns-overview.md). > [!IMPORTANT]
- > Private DNS zone names must end with `mysql.database.azure.com`.
- >If you are connecting to the Azure Database for MySQL - Flexible sever with SSL and are using an option to perform full verification (sslmode=VERTIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
+ > Private DNS zone names must end with `mysql.database.azure.com`. If you are connecting to the Azure Database for MySQL - Flexible Server with SSL and are using an option to perform full verification (sslmode=VERIFY_IDENTITY) with certificate subject name, use \<servername\>.mysql.database.azure.com in your connection string.
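+
+As a hedged sketch, assuming the `mysql-connector-python` client and a downloaded CA certificate, a connection that performs full verification might look like the following. The server name, credentials, database, and certificate path are placeholders:
+
+```python
+import mysql.connector
+
+# Placeholders: replace with your server name, credentials, database, and CA certificate path.
+conn = mysql.connector.connect(
+    host="<servername>.mysql.database.azure.com",
+    user="<admin-user>",
+    password="<password>",
+    database="<database>",
+    ssl_ca="/path/to/ca-certificate.pem",
+    ssl_verify_identity=True,  # corresponds to sslmode=VERIFY_IDENTITY
+)
+```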
Learn how to create a flexible server with private access (VNet integration) in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md). ## Integration with custom DNS server
-If you are using the custom DNS server then you must use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNET's DNS Server setting. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+If you are using a custom DNS server, then you must **use a DNS forwarder to resolve the FQDN of Azure Database for MySQL - Flexible Server**. The forwarder IP address should be [168.63.129.16](../../virtual-network/what-is-ip-address-168-63-129-16.md). The custom DNS server should be inside the VNet or reachable via the VNet's DNS Server setting. Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
+> [!IMPORTANT]
+ > For successful provisioning of the Flexible Server, even if you are using a custom DNS server, **you must not block DNS traffic to [AzurePlatformDNS](../../virtual-network/service-tags-overview.md) using [NSG](../../virtual-network/network-security-groups-overview.md)**.
## Private DNS zone and VNET peering Private DNS zone settings and VNET peering are independent of each other. Please refer to the [Using Private DNS Zone](concepts-networking-vnet.md#using-private-dns-zone) section above for more details on creating and using Private DNS zones.
You can then use the flexible servername (FQDN) to connect from the client appli
* After the flexible server is deployed to a virtual network and subnet, you cannot move it to another virtual network or subnet. You cannot move the virtual network into another resource group or subscription. * Subnet size (address spaces) cannot be increased once resources exist in the subnet * Flexible server doesn't support Private Link. Instead, it uses VNet injection to make flexible server available within a VNet.-
-> [!NOTE]
-> If you are using a custom DNS server, then you must use a DNS forwarder to resolve the following FQDNs:
-> * Azure Database for MySQL - Flexible Server
-> * Azure Storage Resources (for successful provisioning of the Flexible Server)
->
-> Refer to [name resolution that uses your own DNS server](../../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) to learn more.
- ## Next steps * Learn how to enable private access (vnet integration) using the [Azure portal](how-to-manage-virtual-network-portal.md) or [Azure CLI](how-to-manage-virtual-network-cli.md)
mysql Quickstart Create Mysql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-arm-template.md
echo "Press [ENTER] to continue ..."
For a step-by-step tutorial that guides you through the process of creating an ARM template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
networking Working Remotely Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/working-remotely-support.md
The Microsoft network is designed to meet the requirements and provide optimal p
Azure VPN gateway supports both Point-to-Site (P2S) and Site-to-Site (S2S) VPN connections. Using the Azure VPN gateway you can scale your employee's connections to securely access both your Azure deployed resources and your on-premises resources. For more information, see [How to enable users to work remotely](../vpn-gateway/work-remotely-support.md).
-If you are using Secure Sockets Tunneling Protocol (SSTP), the number of concurrent connections is limited to 128. To get a higher number of connections, we suggest transitioning to OpenVPN or IKEv2. For more information, see [Transition to OpenVPN protocol or IKEv2 from SSTP](../vpn-gateway/ikev2-openvpn-from-sstp.md
-).
+If you are using Secure Sockets Tunneling Protocol (SSTP), the number of concurrent connections is limited to 128. To get a higher number of connections, we suggest transitioning to OpenVPN or IKEv2. For more information, see [Transition to OpenVPN protocol or IKEv2 from SSTP](../vpn-gateway/ikev2-openvpn-from-sstp.md).
To access your resources deployed in Azure, remote developers could use Azure Bastion solution, instead of VPN connection to get secure shell access (RDP or SSH) without requiring public IPs on the VMs being accessed. For more information, see [Work remotely using Azure Bastion](../bastion/work-remotely-support.md).
object-anchors Model Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/object-anchors/model-conversion-error-codes.md
Title: Model Conversion Error Codes
-description: Model conversion error codes for the Azure Object Anchors service.
+ Title: Model conversion error codes
+description: Learn about model conversion error codes and exception errors in the Azure Object Anchors service, and how to address them.
Previously updated : 04/20/2021- Last updated : 06/10/2022+ + #Customer intent: Explain different modes of model conversion failure and how to recover from them.
-# Model Conversion Error Codes
+# Model conversion error codes
-For common modes of model conversion failure, the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionProperties` object obtained from the `Value` field in the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionOperation` contains an ErrorCode field of the `ConversionErrorCode` type. This type enumerates these common modes of failure for error message localization, failure recovery, and tips to the user on how the error can be corrected.
+For common modes of model conversion failure, the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionProperties` object you get from the `Value` field in the `Azure.MixedReality.ObjectAnchors.Conversion.AssetConversionOperation` contains an `ErrorCode` field of the `ConversionErrorCode` type.
-| Error Code | Description | Mitigation |
+The `ConversionErrorCode` type enumerates the following common modes of model conversion failure. These enumerations are useful for error message localization, failure recovery, and tips to the user on how to correct the error.
+
+| Error code | Description | Mitigation |
| | | |
-| INVALID_ASSET_URI | The asset at the URI provided when starting the conversion job could not be found. | When triggering an asset conversion job, provide an upload URI obtained from the service where the asset to be converted has been uploaded. |
-| INVALID_JOB_ID | The provided ID for the asset conversion job to be created was set to the default all-zero GUID. | If a GUID is specified when creating an asset conversion job, ensure it is not the default all-zero GUID. |
+| INVALID_ASSET_URI | The asset at the URI provided when starting the conversion job couldn't be found. | When triggering an asset conversion job, provide an upload URI you get from the service where the asset to be converted is uploaded. |
+| INVALID_JOB_ID | The provided ID for the asset conversion job was set to the default all-zero GUID. | If a GUID is specified when creating an asset conversion job, make sure it isn't the default all-zero GUID. |
| INVALID_GRAVITY | The gravity vector provided when creating the asset conversion job was a fully zeroed vector. | When starting an asset conversion, provide the gravity vector that corresponds to the uploaded asset. |
-| INVALID_SCALE | The provided scale factor was not a positive non-zero value. | When starting an asset conversion, provide the scalar value that corresponds to the measurement unit scale (with regard to meters) of the uploaded asset. |
-| ASSET_SIZE_TOO_LARGE | The intermediate .PLY file generated from the asset or its serialized equivalent was too large. | Refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-| ASSET_DIMENSIONS_OUT_OF_BOUNDS | The dimensions of the asset exceeded the physical dimension limit. This can be a sign of an improperly set scale for the asset when creating a job. | Inspect the `ScaledAssetDimensions` property in your `AssetConversionProperties` object: it will contain the actual dimensions of the asset that were calculated after applying scale (in meters). Then, refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity, and ensure the provided scale corresponds to the uploaded asset. |
-| ZERO_FACES | The intermediate .PLY file generated from the asset was determined to have no faces, making it invalid for conversion. | Ensure the asset is a valid mesh. |
-| INVALID_FACE_VERTICES | The intermediate .PLY file generated from the asset contained faces that referenced nonexistent vertices. | Ensure the asset file is validly constructed. |
-| ZERO_TRAJECTORIES_GENERATED | The camera trajectories generated from the uploaded asset were empty. | Refer to the [asset guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-| TOO_MANY_RIG_POSES | The number of rig poses in the intermediate .PLY file exceeded service limits. | Refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-| SERVICE_ERROR | An unknown service error occurred. | Contact a member of the Object Anchors service team if the issue persists: https://github.com/Azure/azure-object-anchors/issues |
-| ASSET_CANNOT_BE_CONVERTED | The provided asset was corrupted, malformed, or otherwise unable to be converted in its provided format. | Ensure the asset is a validly constructed file of the specified type, and refer to the [asset size guidelines](faq.md) before submitting an asset for conversion to ensure conformity. |
-
-Any errors that occur outside the actual asset conversion jobs will be thrown as exceptions. Most notably, the `Azure.RequestFailedException` can be thrown for service calls that receive an unsuccessful (4xx or 5xx) or unexpected HTTP response code. For further details on these exceptions, examine the `Status`, `ErrorCode`, or `Message` fields on the exception.
+| INVALID_SCALE | The provided scale factor wasn't a positive non-zero value. | When starting an asset conversion, provide the scalar value that corresponds to the measurement unit scale, with respect to meters, of the uploaded asset. |
+| ASSET_SIZE_TOO_LARGE | The intermediate PLY file generated from the asset or its serialized equivalent was too large. | Ensure conformity with the [asset size guidelines](faq.md) before submitting an asset for conversion. |
+| ASSET_DIMENSIONS_OUT_OF_BOUNDS | The dimensions of the asset exceeded the physical dimension limit. This error can be a sign of an improperly set scale for the asset when creating a job. | Inspect the `ScaledAssetDimensions` property in your `AssetConversionProperties` object. This property contains the actual dimensions of the asset calculated after applying the scale in meters. Then, ensure conformity with the [asset size guidelines](faq.md) before submitting the asset for conversion. Make sure the provided scale corresponds to the uploaded asset. |
+| ZERO_FACES | The intermediate PLY file generated from the asset was determined to have no faces, making it invalid for conversion. | Ensure the asset is a valid mesh. |
+| INVALID_FACE_VERTICES | The intermediate PLY file generated from the asset contained faces that referenced nonexistent vertices. | Ensure the asset file is validly constructed. |
+| ZERO_TRAJECTORIES_GENERATED | The camera trajectories generated from the uploaded asset were empty. | Ensure conformity with the [asset size guidelines](faq.md) before submitting an asset for conversion. |
+| TOO_MANY_RIG_POSES | The number of rig poses in the intermediate PLY file exceeded service limits. | Ensure conformity with the [asset size guidelines](faq.md) before submitting an asset for conversion. |
+| SERVICE_ERROR | An unknown service error occurred. | [File a GitHub issue to the Object Anchors service team](https://github.com/Azure/azure-object-anchors/issues) if the issue persists. |
+| ASSET_CANNOT_BE_CONVERTED | The provided asset was corrupted, malformed, or otherwise couldn't be converted in its provided format. | Ensure the asset is a validly constructed file of the specified type. Ensure conformity with the [asset size guidelines](faq.md) before submitting the asset for conversion. |
+
+## Exception errors
+
+Any errors that occur outside the actual asset conversion jobs are thrown as exceptions. Most notably, the `Azure.RequestFailedException` can be thrown for service calls that receive an unsuccessful (4xx or 5xx) or unexpected HTTP response code. For further details on these exceptions, examine the `Status`, `ErrorCode`, or `Message` fields on the exception.
| Exception | Cause |
| | |
-| ArgumentException | <ul><li>Occurs when using an invalidly constructed or all zero account ID to construct a request with the ObjectAnchorsConversionClient.</li><li>Occurs when attempting to initialize the ObjectAnchorsConversionClient using an invalid whitespace account domain.</li><li>Occurs when an unsupported service version is provided to the ObjectAnchorsConversionClient through ObjectAnchorsConversionClientOptions.</li></ul> |
-| ArgumentNullException | <ul><li>Occurs when attempting to initialize the ObjectAnchorsConversionClient using an invalid null account domain.</li><li>Occurs when attempting to initialize the ObjectAnchorsConversionClient using an invalid null credential.</li></ul> |
-| RequestFailedException | <ul><li>Occurs for all other issues resulting from a bad HTTP status code (unrelated to the status of a job that will/is/has run), such as an account not being found, an invalid upload uri being detected by the fronted, frontend service error, etc.</li></ul> |
-| UnsupportedAssetFileTypeException | <ul><li>Occurs when attempting to submit a job with an asset with an extension or specified filetype that is unsupported by the Azure Object Anchors Conversion service.</li></ul> |
+| ArgumentException | <ul><li>Using an invalidly constructed or all-zero account ID to construct a request with the `ObjectAnchorsConversionClient`.</li><li>Attempting to initialize the `ObjectAnchorsConversionClient` using an invalid whitespace account domain.</li><li>Providing an unsupported service version to the `ObjectAnchorsConversionClient` through `ObjectAnchorsConversionClientOptions`.</li></ul> |
+| ArgumentNullException | <ul><li>Attempting to initialize the `ObjectAnchorsConversionClient` using an invalid null account domain.</li><li>Attempting to initialize the `ObjectAnchorsConversionClient` using an invalid null credential.</li></ul> |
+| RequestFailedException | <ul><li>All other issues resulting from a bad HTTP status code, unrelated to job status. Examples include an account not being found, the front end detecting an invalid upload URI, or a front end service error.</li></ul> |
+| UnsupportedAssetFileTypeException | <ul><li>Submitting an asset with an extension or specified filetype that the Azure Object Anchors Conversion service doesn't support.</li></ul> |
+
+## Next steps
+
+- [Quickstart: Create an Object Anchors model from a 3D model](quickstarts/get-started-model-conversion.md)
+- [Frequently asked questions about Azure Object Anchors](faq.md)
+- [Azure Object Anchors client library for .NET](/dotnet/api/overview/azure/mixedreality.objectanchors.conversion-readme-pre)
openshift Howto Create Private Cluster 4X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-4x.md
Once you're logged into the OpenShift Web Console, click on the **?** on the top
![Image shows Azure Red Hat OpenShift login screen](media/aro4-download-cli.png)
-You can also download the latest release of the CLI appropriate to your machine from <https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/>.
+You can also download the [latest release of the CLI](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) appropriate to your machine.
## Connect using the OpenShift CLI
openshift Tutorial Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-connect-cluster.md
Once you're logged into the OpenShift Web Console, click on the **?** on the top
![Screenshot that highlights the Command Line Tools option in the list when you select the ? icon.](media/aro4-download-cli.png)
-You can also download the latest release of the CLI appropriate to your machine from <https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/>.
+You can also download the [latest release of the CLI](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) appropriate to your machine.
If you're running the commands on the Azure Cloud Shell, download the latest OpenShift 4 CLI for Linux.
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a list of high-level features and capabilities comp
| Microsoft Defender for Cloud | Yes | No |
| Resource health | Yes | Yes |
| Service health | Yes | Yes |
-| Performance insights (iPerf) | Yes | Yes |
+| Performance insights (iPerf) | Yes | Yes. Not available in the portal |
| Major version upgrades support | No | No |
| Minor version upgrades | Yes. Automatic during maintenance window | Yes. Automatic during maintenance window |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* Support for choosing [standby availability zone](./how-to-manage-high-availability-portal.md) when deploying zone-redundant high availability.
* Support for [extensions](concepts-extensions.md) PLV8, pgrouting with new servers<sup>$</sup>
* Version updates for [extension](concepts-extensions.md) PostGIS.
+* General availability of Azure Database for PostgreSQL - Flexible Server in Canada East and Jio India West regions.
<sup>**$**</sup> New servers get these features automatically. In your existing servers, these features are enabled during your server's future maintenance window.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
- Title: "Migrate from Azure Database for PostgreSQL Single Server to Flexible Server - Concepts"-
-description: Concepts about migrating your Single server to Azure database for PostgreSQL Flexible server.
---- Previously updated : 05/11/2022---
-# Migrate from Azure Database for PostgreSQL Single Server to Flexible Server (preview)
--
-Azure Database for PostgreSQL Flexible Server provides zone-redundant high availability, control over price, and control over maintenance windows. You can use the available migration tool to move your databases from Single Server to Flexible Server. To understand the differences between the two deployment options, see [this comparison chart](../flexible-server/concepts-compare-single-server-flexible-server.md).
-
-By using the migration tool, you can initiate migrations for multiple servers and databases in a repeatable way. The tool automates most of the migration steps to make the migration journey across Azure platforms as seamless as possible. The tool is free for customers.
-
->[!NOTE]
-> The migration tool is in private preview.
->
-> Migration from Single Server to Flexible Server is enabled in preview in these regions: Australia Southeast, Canada Central, Canada East, East Asia, North Central US, South Central US, Switzerland North, UAE North, UK South, UK West, West US, and Central US.
-
-## Overview
-
-The migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
-
-You choose the source server and can select up to eight databases from it. This limitation is per migration task. The migration tool automates the following steps:
-
-1. Creates the migration infrastructure in the region of the target server.
-2. Creates a public IP address and attaches it to the migration infrastructure.
-3. Adds the migration infrastructure's IP address to the allowlist on the firewall rules of both the source and target servers.
-4. Creates a migration project with both source and target types as Azure Database for PostgreSQL.
-5. Creates a migration activity to migrate the databases specified by the user from the source to the target.
-6. Migrates schemas from the source to the target.
-7. Creates databases with the same name on the Flexible Server target.
-8. Migrates data from the source to the target.
-
-The following diagram shows the process flow for migration from Single Server to Flexible Server via the migration tool.
-
-
-The steps in the process are:
-
-1. Create a Flexible Server target.
-2. Invoke migration.
-3. Provision the migration infrastructure by using Azure Database Migration Service.
-4. Start the migration.
- 1. Initial dump/restore (online and offline)
- 1. Streaming the changes (online only)
-5. Cut over to the target.
-
-The migration tool is exposed through the Azure portal and through easy-to-use Azure CLI commands. It allows you to create migrations, list migrations, display migration details, modify the state of the migration, and delete migrations.
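For example, once the Azure CLI is installed you can list the migration subcommands it exposes. This is a minimal sketch that only prints help text; it assumes Azure CLI version 2.28.0 or later, which the migration commands require:

```azurecli-interactive
# List the migration subcommands that the Azure CLI exposes for Flexible Server.
az postgres flexible-server migration --help
```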
-
-## Comparison of migration modes
-
-The tool supports two modes for migration from Single Server to Flexible Server. The *online* option provides reduced downtime for the migration, with logical replication restrictions. The *offline* option offers a simple migration but might incur extended downtime, depending on the size of databases.
-
-The following table summarizes the differences between the migration modes.
-
-| Capability | Online | Offline |
-|:|:-|:--|
-| Database availability for reads during migration | Available | Available |
-| Database availability for writes during migration | Available | Generally not recommended, because any writes initiated after the migration are not captured or migrated |
-| Application suitability | Applications that need maximum uptime | Applications that can afford a planned downtime window |
-| Environment suitability | Production environments | Usually development environments, testing environments, and some production environments that can afford downtime |
-| Suitability for write-heavy workloads | Suitable but expected to reduce the workload during migration | Not applicable, because writes at the source after migration begins are not replicated to the target |
-| Manual cutover | Required | Not required |
-| Downtime required | Less | More |
-| Logical replication limitations | Applicable | Not applicable |
-| Migration time required | Depends on the database size and the write activity until cutover | Depends on the database size |
-
-Based on those differences, pick the mode that best works for your workloads.
-
-### Migration considerations for offline mode
-
-The migration process for offline mode entails a dump of the source Single Server database, followed by a restore at the Flexible Server target.
-
-The following table shows the approximate time for performing offline migrations for databases of various sizes.
-
->[!NOTE]
-> Add about 15 minutes for the migration infrastructure to be deployed for each migration task. Each task can migrate up to eight databases.
-
-| Database size | Approximate time taken (HH:MM) |
-|:|:-|
-| 1 GB | 00:01 |
-| 5 GB | 00:05 |
-| 10 GB | 00:10 |
-| 50 GB | 00:45 |
-| 100 GB | 06:00 |
-| 500 GB | 08:00 |
-| 1,000 GB | 09:30 |
-
-### Migration considerations for online mode
-
-The migration process for online mode entails a dump of the Single Server source database, a restore of that dump in the Flexible Server target, and then replication of ongoing changes. You capture change data by using logical decoding.
-
-The time for completing an online migration depends on the incoming writes to the source server. The higher the write workload is on the source, the more time it takes for the data to be replicated to Flexible Server.
-
-## Migration steps
-
-### Prerequisites
-
-Before you start using the migration tool:
-- [Create an Azure Database for PostgreSQL server](../flexible-server/quickstart-create-server-portal.md).
-- [Enable logical replication](../single-server/concepts-logical.md) on the source server. (A CLI sketch for this step follows the prerequisites list.)
- :::image type="content" source="./media/concepts-single-to-flexible/logical-replication-support.png" alt-text="Screenshot of logical replication support in the Azure portal." lightbox="./media/concepts-single-to-flexible/logical-replication-support.png":::
-
- >[!NOTE]
- > Enabling logical replication will require a server restart for the change to take effect.
--- [Set up an Azure Active Directory (Azure AD) app](./how-to-setup-azure-ad-app-portal.md). An Azure AD app is a critical component of the migration tool. It helps with role-based access control as the migration tool accesses both the source and target servers.-
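The following sketch shows one way to script the logical replication prerequisite with the Azure CLI instead of the portal. It assumes a hypothetical Single Server source named `mysingleserver` in the resource group `my-learning-rg`, and that the server's `azure.replication_support` parameter is what controls logical decoding; confirm the parameter name against the logical replication article linked above before running it.

```azurecli-interactive
# Switch the Single Server replication support setting to logical decoding (assumed parameter name).
az postgres server configuration set \
    --resource-group my-learning-rg \
    --server-name mysingleserver \
    --name azure.replication_support \
    --value logical

# Restart the server so the change takes effect (causes a short downtime).
az postgres server restart \
    --resource-group my-learning-rg \
    --name mysingleserver
```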
-### Data and schema migration
-
-After you finish the prerequisites, migrate the data and schemas by using one of these methods:
-- [Migrate by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)
-- [Migrate by using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
-### Post-migration considerations
-- All the resources that the migration tool creates will be automatically cleaned up, whether the migration succeeds, fails, or is canceled. No action is required from you.
-- If your migration fails, you can create a new migration task with a different name and retry the operation.
-- If you have more than eight databases on your Single Server source and you want to migrate them all, we recommend that you create multiple migration tasks. Each task can migrate up to eight databases.
-- The migration does not move the database users and roles of the source server. You have to manually create these and apply them to the target server after migration.
-- For security reasons, we highly recommend that you delete the Azure AD app after the migration finishes.
-- After you validate your data and make your application point to Flexible Server, you can consider deleting your Single Server source.
-## Limitations
-
-### Size
-- You can migrate databases of sizes up to 1 TB by using this tool. To migrate larger databases or heavy write workloads, contact your account team or [contact us](mailto:AskAzureDBforPGS2F@microsoft.com).
-- In one migration attempt, you can migrate up to eight user databases from Single Server to Flexible Server. If you have more databases to migrate, you can create multiple migrations between the same Single Server source and Flexible Server target.
-### Performance
-- The migration infrastructure is deployed on a four-vCore virtual machine that might limit migration performance.
-- The deployment of migration infrastructure takes 10 to 15 minutes before the actual data migration starts, regardless of the size of data or the migration mode (online or offline).
-### Replication
--- The migration tool uses a logical decoding feature of PostgreSQL to perform the online migration. The decoding feature has the following limitations. For more information about logical replication limitations, see the [PostgreSQL documentation](https://www.postgresql.org/docs/10/logical-replication-restrictions.html).
- - Data Definition Language (DDL) commands are not replicated.
- - Sequence data is not replicated.
- - Truncate commands are not replicated.
-
- To work around this limitation, use `DELETE` instead of `TRUNCATE`. To avoid accidental `TRUNCATE` invocations, you can revoke the `TRUNCATE` privilege from tables.
-
- - Views, materialized views, partition root tables, and foreign tables are not migrated.
--- Logical decoding will use resources in the Single Server source. Consider reducing the workload, or plan to scale CPU/memory resources at the Single Server source during the migration.-
-### Other limitations
-- The migration tool migrates only data and schemas of the Single Server databases to Flexible Server. It does not migrate other features, such as server parameters, connection security details, firewall rules, users, roles, and permissions. In other words, everything except data and schemas must be manually configured in the Flexible Server target.
-- The migration tool does not validate the data in the Flexible Server target after migration. You must do this validation manually.
-- The migration tool migrates only user databases, including Postgres databases. It doesn't migrate system or maintenance databases.
-- If migration fails, there is no option to retry the same migration task. You have to create a new migration task with a unique name.
-- The migration tool does not include an assessment of your Single Server source.
-## Best practices
-- As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
-- Plan the mode of migration for each database. For simpler migrations and smaller databases, consider offline mode.
-- Batch similar-sized databases in a migration task.
-- Perform large database migrations with one or two databases at a time to avoid source-side load and migration failures.
-- Perform test migrations before migrating for production:
- - Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing.
-
- The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the continuous replication (CDC) phase with minimal lag, make your Flexible Server target the primary database server. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher Postgres version, test for application compatibility.
-
- - After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready.
-
- The production migration requires close monitoring. For an online migration, the replication must be completed before you perform the cutover, to prevent data loss.
-- Cut over all dependent applications to access the new primary database, and open the applications for production usage.
-- After the application starts running on the Flexible Server target, monitor the database performance closely to see if performance tuning is required.
-## Next steps
-- [Migrate to Flexible Server by using the Azure portal](../migrate/how-to-migrate-single-to-flexible-portal.md)
-- [Migrate to Flexible Server by using the Azure CLI](../migrate/how-to-migrate-single-to-flexible-cli.md)
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
- Title: "Migrate from Single Server to Flexible Server by using the Azure CLI"-
-description: Learn about migrating your Single Server databases to Azure database for PostgreSQL Flexible Server by using the Azure CLI.
---- Previously updated : 05/09/2022--
-# Migrate from Single Server to Flexible Server by using the Azure CLI
--
-This article shows you how to use the migration tool in the Azure CLI to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
-
->[!NOTE]
-> The migration tool is in private preview.
-
-## Prerequisites
-
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
-2. Register your subscription for Azure Database Migration Service. (If you've already done it, you can skip this step.)
-
- 1. On the Azure portal, go to your subscription.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png" alt-text="Screenshot of Azure Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms.png":::
-
- 1. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png" alt-text="Screenshot of the Register button for Azure Database Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-cli-dms-register.png":::
-
-3. Install the latest Azure CLI for your operating system from the [Azure CLI installation page](/cli/azure/install-azure-cli).
-
- If the Azure CLI is already installed, check the version by using the `az version` command. The version should be 2.28.0 or later to use the migration CLI commands. If not, [update your Azure CLI version](/cli/azure/update-azure-cli). (A version-check sketch follows this list.)
-4. Run the `az login` command:
-
- ```bash
- az login
- ```
-
- A browser window opens with the Azure sign-in page. Provide your Azure credentials to authenticate. For other ways to sign in with the Azure CLI, see [this article](/cli/azure/authenticate-azure-cli).
-5. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#prerequisites). You need them to get started with the migration tool.
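As a quick check for the Azure CLI version prerequisite, the following minimal sketch prints the installed version and updates the CLI in place where self-upgrade is supported; otherwise, use the update link in step 3:

```azurecli-interactive
# Print the installed Azure CLI version; it should be 2.28.0 or later for the migration commands.
az version

# Update the Azure CLI in place (available on recent CLI installations).
az upgrade
```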
-
-## Migration CLI commands
-
-The migration tool comes with easy-to-use CLI commands to do migration-related tasks. All the CLI commands start with `az postgres flexible-server migration`.
-
-For help with understanding the options associated with a command and with framing the right syntax, you can use the `help` parameter:
-
-```azurecli-interactive
-az postgres flexible-server migration --help
-```
-
-That command gives you the following output:
--
-The output lists the supported migration commands, along with their actions. Let's look at these commands in detail.
-
-### Create a migration
-
-The `create` command helps in creating a migration from a source server to a target server:
-
-```azurecli-interactive
-az postgres flexible-server migration create --help
-```
-
-That command gives the following result:
--
-It calls out the expected arguments and has an example syntax for creating a successful migration from the source server to the target server. Here's the CLI command to create a migration:
-
-```azurecli
-az postgres flexible-server migration create [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--properties]
-```
-
-| Parameter | Description |
-| - | - |
-|`subscription` | Subscription ID of the Flexible Server target. |
-|`resource-group` | Resource group of the Flexible Server target. |
-|`name` | Name of the Flexible Server target. |
-|`migration-name` | Unique identifier for the migration attempt to the Flexible Server target. This field accepts only alphanumeric characters and the hyphen (`-`); no other special characters are allowed. The name can't start with a hyphen, and no two migrations to the same Flexible Server target can have the same name. |
-|`properties` | Absolute path to a JSON file that has the information about the Single Server source. |
-
-For example:
-
-```azurecli-interactive
-az postgres flexible-server migration create --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
-```
-
-The `migration-name` argument used in the `create` command will be used in other CLI commands, such as `update`, `delete`, and `show`. In all those commands, it uniquely identifies the migration attempt in the corresponding actions.
-
-The migration tool offers online and offline modes of migration. To know more about the migration modes and their differences, see [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
-
-Create a migration between source and target servers by using the migration mode of your choice. The `create` command needs a JSON file to be passed as part of its `properties` argument.
-
-The structure of the JSON is:
-
-```bash
-{
-"properties": {
- "SourceDBServerResourceId":"subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
-
-"SourceDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the source server as per the custom DNS server",
-"TargetDBServerFullyQualifiedDomainName":ΓÇ»"fqdn of the target server as per the custom DNS server"
-
-"SecretParameters": {
- "AdminCredentials":
- {
- "SourceServerPassword": "<password>",
- "TargetServerPassword": "<password>"
- },
-"AADApp":
- {
- "ClientId": "<client id>",
- "TenantId": "<tenant id>",
- "AadSecret": "<secret>"
- }
-},
-
-"MigrationResourceGroup":
- {
- "ResourceId":"subscriptions/<subscriptionid>/resourceGroups/<temp_rg_name>",
- "SubnetResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<rg_name>/providers/Microsoft.Network/virtualNetworks/<Vnet_name>/subnets/<subnet_name>"
- },
-
-"DBsToMigrate":
- [
- "<db1>","<db2>"
- ],
-
-"SetupLogicalReplicationOnSourceDBIfNeeded":ΓÇ»"true",
-
-"OverwriteDBsInTarget":ΓÇ»"true"
-
-}
-
-}
-
-```
-
-Here are the `create` parameters:
-
-| Parameter | Type | Description |
-| - | - | - |
-| `SourceDBServerResourceId` | Required | This is the resource ID of the Single Server source and is mandatory. |
-| `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
-| `TargetDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution inside a virtual network. Provide the FQDN of the Flexible Server target according to the custom DNS server. <br> `SourceDBServerFullyQualifiedDomainName` and `TargetDBServerFullyQualifiedDomainName` should be included as a part of the JSON only in the rare scenario of a custom DNS server being used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters as a part of the JSON file. |
-| `SecretParameters` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target, along with the Azure Active Directory app credentials. These passwords help to authenticate against the source and target servers. They also help in checking proper authorization access to the resources.
-| `MigrationResourceGroup` | Optional | This section consists of two properties: <br><br> `ResourceID` (optional): The migration infrastructure and other network infrastructure components are created to migrate data and schemas from the source to the target. By default, all the components that this tool creates are provisioned under the resource group of the target server. If you want to deploy them under a different resource group, you can assign the resource ID of that resource group to this property. <br><br> `SubnetResourceID` (optional): If your source has public access turned off, or if your target server is deployed inside a virtual network, specify a subnet under which migration infrastructure needs to be created so that it can connect to both source and target servers. |
-| `DBsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
-| `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. |
-| `OverwriteDBsInTarget` | Optional | If the target server happens to have an existing database with the same name as the one you're trying to migrate, the migration will pause until you acknowledge that overwrites in the target databases are allowed. You can avoid this pause by setting the value of this property to `true`, which gives the migration tool permission to automatically overwrite databases. |
-
-### Choose a migration mode
-
-The default migration mode for migrations created through CLI commands is *online*. Filling out the preceding properties in your JSON file would create an online migration from your Single Server source to the Flexible Server target.
-
-If you want to migrate in offline mode, you need to add another property (`"TriggerCutover":"true"`) to your JSON file before you initiate the `create` command.
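For example, an offline-mode properties file carries the same fields shown earlier plus the extra flag. This is a deliberately truncated sketch, not a complete file; keep all the required properties from the full structure above:

```json
{
  "properties": {
    "SourceDBServerResourceId": "subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
    "DBsToMigrate": [ "<db1>" ],
    "TriggerCutover": "true"
  }
}
```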
-
-### List migrations
-
-The `list` command shows the migration attempts that were made to a Flexible Server target. Here's the CLI command to list migrations:
-
-```azurecli
-az postgres flexible-server migration list [--subscription]
- [--resource-group]
- [--name]
- [--filter]
-```
-
-The `filter` parameter can take these values:
-- `Active`: Lists the current active migration attempts for the target server. It does not include the migrations that have reached a failed, canceled, or succeeded state.
-- `All`: Lists all the migration attempts to the target server. This includes both the active and past migrations, regardless of the state.
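For example, to list every migration attempt made against a target server, reusing the placeholder names from the earlier `create` example:

```azurecli-interactive
az postgres flexible-server migration list --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --filter All
```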
-For more information about this command, use the `help` parameter:
-
-```azurecli-interactive
-az postgres flexible-server migration list --help
-```
-
-### Show details
-
-Use the following `show` command to get the details of a specific migration. These details include information on the current state and substate of the migration.
-
-```azurecli
-az postgres flexible-server migration show [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
-```
-
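For example, reusing the placeholder names from the earlier `create` example:

```azurecli-interactive
az postgres flexible-server migration show --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
```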
-The `migration-name` parameter is the name assigned to the migration during the `create` command. Here's a snapshot of the sample response from the CLI command for showing details:
--
-Note these important points for the command response:
-- As soon as the `create` command is triggered, the migration moves to the `InProgress` state and the `PerformingPreRequisiteSteps` substate. It takes up to 15 minutes for the migration workflow to deploy the migration infrastructure, configure firewall rules with source and target servers, and perform a few maintenance tasks.
-- After the `PerformingPreRequisiteSteps` substate is completed, the migration moves to the substate of `Migrating Data`, where the dump and restore of the databases take place.
-- Each database being migrated has its own section with all migration details, such as table count, incremental inserts, deletions, and pending bytes.
-- The time that the `Migrating Data` substate takes to finish depends on the size of databases that are being migrated.
-- For offline mode, the migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state.
-- For online mode, the migration moves to the state of `WaitingForUserAction` and a substate of `WaitingForCutoverTrigger` after the `Migrating Data` state finishes successfully. The next section covers the details of the `WaitingForUserAction` state.
-For more information about this command, use the `help` parameter:
-
-```azurecli-interactive
- az postgres flexible-server migration show --help
- ```
-
-### Update a migration
-
-As soon as the infrastructure setup is complete, the migration activity will pause. Messages in the response for the CLI command will show details if some prerequisites are missing or if the migration is at a state to perform a cutover. At this point, the migration goes into a state called `WaitingForUserAction`.
-
-You use the `update` command to set values for parameters, which helps the migration move to the next stage in the process. Let's look at each of the substates.
-
-#### WaitingForLogicalReplicationSetupRequestOnSourceDB
-
-If the logical replication is not set at the source server or if it was not included as a part of the JSON file, the migration will wait for logical replication to be enabled at the source. You can enable the logical replication setting manually by changing the replication flag to `Logical` on the portal. This change requires a server restart.
-
-You can also enable the logical replication setting by using the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--initiate-data-migration]
-```
-
-To set logical replication on your source server, pass the value `true` to the `initiate-data-migration` property. For example:
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --initiate-data-migration true
-```
-
-If you enable it manually, *you still need to issue the preceding `update` command* for the migration to move out of the `WaitingForUserAction` state. The server doesn't need to restart again because that already happened via the portal action.
-
-#### WaitingForTargetDBOverwriteConfirmation
-
-`WaitingForTargetDBOverwriteConfirmation` is the state where migration is waiting for confirmation on target overwrite, because data is already present in the target server for the database that's being migrated. You can enable it by using the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--overwrite-dbs]
-```
-
-To give the migration permissions to overwrite any existing data in the target server, you need to pass the value `true` to the `overwrite-dbs` property. For example:
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --overwrite-dbs true
-```
-
-#### WaitingForCutoverTrigger
-
-Migration gets to the `WaitingForCutoverTrigger` state when the dump and restore of the databases have finished and the ongoing writes at your Single Server source are being replicated to the Flexible Server target. You should wait for the replication to finish so that the target is in sync with the source.
-
-You can monitor the replication lag by using the response from the `show` command. A metric called **Pending Bytes** is associated with each database that's being migrated. This metric gives you an indication of the difference between the source and target databases in bytes. This number should be nearing zero over time. After the number reaches zero for all the databases, stop any further writes to your Single Server source. Then, validate the data and schema on your Flexible Server target to make sure they match exactly with the source server.
-
-After you complete the preceding steps, you can trigger a cutover by using the following CLI command:
-
-```azurecli
-az postgres flexible-server migration update [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
- [--cutover]
-```
-
-For example:
-
-```azurecli-interactive
-az postgres flexible-server migration update --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --cutover
-```
-
-After you use the preceding command, use the command for showing details to monitor if the cutover has finished successfully. Upon successful cutover, migration will move to a `Succeeded` state. Update your application to point to the new Flexible Server target.
-
-For more information about this command, use the `help` parameter:
-
-```azurecli-interactive
- az postgres flexible-server migration update --help
- ```
-
-### Delete or cancel a migration
-
-You can delete or cancel any ongoing migration attempts by using the `delete` command. This command stops all migration activities in that task, but it doesn't drop or roll back any changes on your target server. Here's the CLI command to delete a migration:
-
-```azurecli
-az postgres flexible-server migration delete [--subscription]
- [--resource-group]
- [--name]
- [--migration-name]
-```
-
-For example:
-
-```azurecli-interactive
-az postgres flexible-server migration delete --subscription 5c5037e5-d3f1-4e7b-b3a9-f6bf9asd2nkh0 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1
-```
-
-For more information about this command, use the `help` parameter:
-
-```azurecli-interactive
- az postgres flexible-server migration delete --help
- ```
-
-## Monitoring migration
-
-The `create` command starts a migration between the source and target servers. The migration goes through a set of states and substates before eventually moving into the `completed` state. The `show` command helps you monitor ongoing migrations, because it gives the current state and substate of the migration.
-
-The following tables describe the migration states and substates.
-
-| Migration state | Description |
-| - | - |
-| `InProgress` | The migration infrastructure is being set up, or the actual data migration is in progress. |
-| `Canceled` | The migration has been canceled or deleted. |
-| `Failed` | The migration has failed. |
-| `Succeeded` | The migration has succeeded and is complete. |
-| `WaitingForUserAction` | Migration is waiting on a user action. This state has a list of substates that were discussed in detail in the previous section. |
-
-| Migration substate | Description |
-| - | - |
-| `PerformingPreRequisiteSteps` | Infrastructure is being set up and is being prepped for data migration. |
-| `MigratingData` | Data is being migrated. |
-| `CompletingMigration` | Migration cutover is in progress. |
-| `WaitingForLogicalReplicationSetupRequestOnSourceDB` | Waiting for logical replication to be enabled. You can enable it manually or by using the `update` CLI command covered earlier in this article. |
-| `WaitingForCutoverTrigger` | Migration is ready for cutover. You can start the cutover when ready. |
-| `WaitingForTargetDBOverwriteConfirmation` | Waiting for confirmation on target overwrite. Data is already present in the target server. <br> You can confirm the overwrite via the `update` CLI command. |
-| `Completed` | Cutover was successful, and migration is complete. |
-
-## Custom DNS for name resolution
-
-To find out if custom DNS is used for name resolution, go to the virtual network where you deployed your source or target server, and then select **DNS server**. The virtual network should indicate if it's using a custom DNS server or the default Azure-provided DNS server.
--
-## Next steps
--- For a successful end-to-end migration, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md).
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
- Title: "Migrate from Single Server to Flexible Server by using the Azure portal"-
-description: Learn about migrating your Single Server databases to Azure database for PostgreSQL Flexible Server by using the Azure portal.
---- Previously updated : 05/09/2022--
-# Migrate from Single Server to Flexible Server by using the Azure portal
--
-This article shows you how to use the migration tool in the Azure portal to migrate databases from Azure Database for PostgreSQL Single Server to Flexible Server.
-
->[!NOTE]
-> The migration tool is in private preview.
-
-## Prerequisites
-
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
-2. Register your subscription for Azure Database Migration Service:
-
- 1. On the Azure portal, go to your subscription.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png" alt-text="Screenshot of Azure portal subscription details." lightbox="./media/concepts-single-to-flexible/single-to-flex-azure-portal.png":::
-
- 1. On the left menu, select **Resource Providers**. Search for **Microsoft.DataMigration**, and then select **Register**.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png" alt-text="Screenshot of the Register button for Azure Data Migration Service." lightbox="./media/concepts-single-to-flexible/single-to-flex-register-data-migration.png":::
-
-3. Complete the prerequisites listed in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#prerequisites). You need them to get started with the migration tool.
-
-## Configure the migration task
-
-The migration tool comes with a simple, wizard-based experience on the Azure portal. Here's how to start:
-
-1. Open your web browser and go to the [portal](https://portal.azure.com/). Enter your credentials to sign in. The default view is your service dashboard.
-
-2. Go to your Azure Database for PostgreSQL Flexible Server target. If you haven't created a Flexible Server target, [create one now](../flexible-server/quickstart-create-server-portal.md).
-
-3. On the **Overview** tab for Flexible Server, on the left menu, scroll down to **Migration (preview)** and select it.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png" alt-text="Screenshot of Migration tab details." lightbox="./media/concepts-single-to-flexible/single-to-flex-migration-preview.png":::
-
-4. Select the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration.
-
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of the Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
-
- If you've already created migrations to your Flexible Server target, the grid is populated with information about migrations that were attempted to this target from Single Server sources.
-
-5. Select the **Migrate from Single Server** button. You'll go through a wizard-based series of tabs to create a migration to this Flexible Server target from any Single Server source.
-
-### Setup tab
-
-The first tab is **Setup**. It has basic information about the migration and the list of prerequisites for getting started with migrations. These prerequisites are the same as the ones listed in the [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md) article.
--
-**Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and does not accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name.
-
-**Migration resource group** is where the migration tool will create all the migration-related components. By default, the resource group of the Flexible Server target and all the components will be cleaned up automatically after the migration finishes. If you want to create a temporary resource group for the migration, create it and then select it from the dropdown list.
-
-For **Azure Active Directory App**, click the **select** option and choose the Azure Active Directory app that you created for the prerequisite step. Then, in the **Azure Active Directory Client Secret** box, paste the client secret that was generated for that app.
--
-Select the **Next** button.
-
-### Source tab
-
-The **Source** tab prompts you to give details related to the Single Server source that databases need to be migrated from.
--
-After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Server sources under that resource group across regions. Select the source that you want to migrate databases from. We recommend that you migrate databases from a Single Server source to a Flexible Server target in the same region.
-
-After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are automatically populated. The server admin login name is the admin username that was used to create the Single Server source. In the **Password** box, enter the password for that admin login name. It will enable the migration tool to log in to the Single Server source to initiate the dump and migration.
-
-Under **Choose databases to migrate**, there's a list of user databases inside the Single Server source. You can select up to eight databases that can be migrated in a single migration attempt. If there are more than eight user databases, create multiple migrations by using the same experience between the source and target servers.
-
-The final property on the **Source** tab is **Migration mode**. The migration tool offers online and offline modes of migration. The [concepts article](./concepts-single-to-flexible.md) talks more about the migration modes and their differences. After you choose the migration mode, the restrictions that are associated with that mode appear.
-
-When you're finished filling out all the fields, select the **Next** button.
-
-### Target tab
-
-The **Target** tab displays metadata for the Flexible Server target, like subscription name, resource group, server name, location, and PostgreSQL version.
--
-For **Server admin login name**, the tab displays the admin username that was used during the creation of the Flexible Server target. Enter the corresponding password for the admin user. This password is required for the migration tool to log in to the Flexible Server target and perform restore operations.
-
-For **Authorize DB overwrite**:
-- If you select **Yes**, you give this migration service permission to overwrite existing data if a database that's being migrated to Flexible Server is already present.
-- If you select **No**, the migration service goes into a waiting state and asks you for permission to either overwrite the data or cancel the migration.
-Select the **Next** button.
-
-### Networking tab
-
-The content on the **Networking** tab depends on the networking topology of your source and target servers. If both source and target servers are in public access, the following message appears.
--
-In this case, you don't need to do anything and can select the **Next** button.
-
-If either the source or target server is configured in private access, the content of the **Networking** tab is different.
--
-Let's try to understand what private access means for Single Server and Flexible Server:
-- **Single Server Private Access**: **Deny public network access** is set to **Yes**, and a private endpoint is configured.
-- **Flexible Server Private Access**: A Flexible Server target is deployed inside a virtual network.
-For private access, all the fields are automatically populated with subnet details. This is the subnet in which the migration tool will deploy Azure Database Migration Service to move data between the source and the target.
-
-You can use the suggested subnet or choose a different one. But make sure that the selected subnet can connect to both the source and target servers.
-
-After you choose a subnet, select the **Next** button.
-
-### Review + create tab
-
-The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration.
--
-## Monitor migrations
-
-After you select the **Create** button, a notification appears in a few seconds to say that the migration was successfully created.
--
-You should automatically be redirected to the **Migration (Preview)** page of Flexible Server. That page has a new entry for the recently created migration.
--
-The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Version**, **Databases**, and **Start time**. By default, the grid shows the list of migrations in descending order of migration start times. In other words, recent migrations appear on top of the grid.
-
-You can use the refresh button to refresh the status of the migrations.
-
-You can also select the migration name in the grid to see the details of that migration.
--
-As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes up to 10 minutes for the migration workflow to move out of this substate. The reason is that it takes time to create and deploy Database Migration Service, add the IP address on the firewall list of source and target servers, and perform maintenance tasks.
-
-After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** where the dump and restore of the databases take place. The time that the **Migrating Data** substate takes to finish depends on the size of databases that you're migrating.
-
-When you select each of the databases that are being migrated, a fan-out pane appears. It has all the migration details, such as table count, incremental inserts, deletes, and pending bytes.
-
-For offline mode, the migration moves to the **Succeeded** state as soon as the **Migrating Data** state finishes successfully. If there's an issue at the **Migrating Data** state, the migration moves into a **Failed** state.
-
-For online mode, the migration moves to the **WaitingForUserAction** state and the **WaitingForCutOver** substate after the **Migrating Data** substate finishes successfully.
--
-Select the migration name to open the migration details page. There, you should see the substate of **WaitingForCutover**.
--
-At this stage, the ongoing writes at your Single Server source are replicated to the Flexible Server target via the logical decoding feature of PostgreSQL. You should wait until the replication reaches a state where the target is almost in sync with the source.
-
-You can monitor the replication lag by selecting each database that's being migrated. That opens a fan-out pane with metrics. The value of the **Pending Bytes** metric should be nearing zero over time. After it reaches a few megabytes for all the databases, stop any further writes to your Single Server source and wait until the metric reaches 0. Then, validate the data and schemas on your Flexible Server target to make sure that they match exactly with the source server.
-
-After you complete the preceding steps, select the **Cutover** button. The following message appears.
--
-Select the **Yes** button to start cutover.
-
-A few seconds after you start cutover, the following notification appears.
--
-When the cutover is complete, the migration moves to the **Succeeded** state. Migration of schema data from your Single Server source to your Flexible Server target is now complete. You can use the refresh button on the page to check if the cutover was successful.
-
-After you complete the preceding steps, you can change your application code to point database connection strings to Flexible Server. You can then start using the target as the primary database server.
-
-Possible migration states include:
-- **InProgress**: The migration infrastructure is being set up, or the actual data migration is in progress.
-- **Canceled**: The migration has been canceled or deleted.
-- **Failed**: The migration has failed.
-- **Succeeded**: The migration has succeeded and is complete.
-- **WaitingForUserAction**: The migration is waiting for a user action.
-Possible migration substates include:
-- **PerformingPreRequisiteSteps**: Infrastructure is being set up and is being prepped for data migration.
-- **MigratingData**: Data is being migrated.
-- **CompletingMigration**: Migration cutover is in progress.
-- **WaitingForLogicalReplicationSetupRequestOnSourceDB**: Waiting for logical replication enablement.
-- **WaitingForCutoverTrigger**: Migration is ready for cutover.
-- **WaitingForTargetDBOverwriteConfirmation**: Waiting for confirmation on target overwrite. Data is present in the target server that you're migrating into.
-- **Completed**: Cutover was successful, and migration is complete.
-## Cancel migrations
-
-You have the option to cancel any ongoing migrations. For a migration to be canceled, it must be in the **InProgress** or **WaitingForUserAction** state. You can't cancel a migration that's in the **Succeeded** or **Failed** state.
-
-You can choose multiple ongoing migrations at once and cancel them.
--
-Canceling a migration stops further migration activity on your target server. It doesn't drop or roll back any changes on your target server from the migration attempts. Be sure to drop the databases involved in a canceled migration on your target server.
-
-## Next steps
-
-Follow the [post-migration steps](./concepts-single-to-flexible.md) for a successful end-to-end migration.
postgresql How To Setup Azure Ad App Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-setup-azure-ad-app-portal.md
- Title: "Set up an Azure AD app to use with Single Server to Flexible Server migration"-
-description: Learn about setting up an Azure AD app to be used with the feature that migrates from Single Server to Flexible Server.
---- Previously updated : 05/09/2022--
-# Set up an Azure AD app to use with migration from Single Server to Flexible Server
--
-This article shows you how to set up an [Azure Active Directory (Azure AD) app](../../active-directory/develop/howto-create-service-principal-portal.md) to use with a migration from Azure Database for PostgreSQL Single Server to Flexible Server.
-
-An Azure AD app helps with role-based access control (RBAC). The migration infrastructure requires access to both the source and target servers, and it's restricted by the roles assigned to the Azure AD app. After you create the Azure AD app, you can use it to manage multiple migrations.
-
-## Create an Azure AD app
-
-1. If you're new to Microsoft Azure, [create an account](https://azure.microsoft.com/free/) to evaluate the offerings.
-2. In the Azure portal, enter **Azure Active Directory** in the search box.
-3. On the page for Azure Active Directory, under **Manage** on the left, select **App registrations**.
-4. Select **New registration**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png" alt-text="Screenshot that shows selections for creating a new registration for an Azure Active Directory app." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-new-registration.png":::
-
-5. Give the app registration a name, choose an option that suits your needs for account types, and then select **Register**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png" alt-text="Screenshot that shows selections for naming and registering an Azure Active Directory app." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-application-registration.png":::
-
-6. After the app is created, copy the client ID and tenant ID and store them. You'll need them for later steps in the migration. Then, select **Add a certificate or secret**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png" alt-text="Screenshot that shows essential information about an Azure Active Directory app, along with the button for adding a certificate or secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-secret-screen.png":::
-
-7. For **Certificates & Secrets**, on the **Client secrets** tab, select **New client secret**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png" alt-text="Screenshot that shows the button for creating a new client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-new-client-secret.png":::
-
-8. On the fan-out pane, add a description, and then use the drop-down list to select the life span of your Azure AD app.
-
-   After all the migrations are complete, you can delete the Azure AD app that you created for RBAC. The default option is **6 months**. If you won't need the Azure AD app for the full six months, select **3 months**. Then select **Add**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png" alt-text="Screenshot that shows adding a description and selecting a life span for a client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-client-secret-description.png":::
-
-9. In the **Value** column, copy the Azure AD app secret. You can copy the secret only during creation. If you miss this step, you'll need to delete the secret and create another one for future attempts. (A quick way to keep and verify these values from the CLI is sketched after these steps.)
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png" alt-text="Screenshot of copying a client secret." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-client-secret-value.png":::
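
As a convenience, you can keep the three values from steps 6 and 9 in environment variables and do a quick sign-in test with them. This is a sketch with placeholder values; signing in as the app switches your CLI context, so sign back in with your own account afterward.

```bash
# Placeholder values copied from the app registration.
export AAD_APP_CLIENT_ID="<application-client-id>"
export AAD_APP_TENANT_ID="<tenant-id>"
export AAD_APP_CLIENT_SECRET="<client-secret-value>"

# Optional sanity check that the secret works, then sign out of the app's context.
az login --service-principal --username "$AAD_APP_CLIENT_ID" --password "$AAD_APP_CLIENT_SECRET" --tenant "$AAD_APP_TENANT_ID"
az logout
```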
-
-## Add contributor privileges to an Azure resource
-
-After you create the Azure AD app, you need to add contributor privileges for it to the following resources.
-
-| Resource | Type | Description |
-| - | - | - |
-| Single Server | Required | Single Server source that you're migrating from. |
-| Flexible Server | Required | Flexible Server target that you're migrating into. |
-| Azure resource group | Required | Resource group for the migration. By default, this is the resource group for the Flexible Server target. If you're using a temporary resource group to create the migration infrastructure, the Azure AD app will require contributor privileges to this resource group. |
-| Virtual network | Required (if used) | If the source or the target has private access, the Azure AD app will require contributor privileges to the corresponding virtual network. If you're using public access, you can skip this step. |
-
-The following steps add contributor privileges to a Flexible Server target. Repeat the steps for the Single Server source, resource group, and virtual network (if used).
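
If you'd rather assign the role from the CLI than follow the portal steps below, a minimal sketch (with placeholder IDs and names) looks like this; repeat it with the scope of each resource in the preceding table.

```bash
# Placeholder subscription, resource group, and server names.
az role assignment create \
  --assignee "<appId-of-the-Azure-AD-app>" \
  --role "Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<target-server-name>"
```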
-
-1. In the Azure portal, select the Flexible Server target. Then select **Access Control (IAM)** on the upper left.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png" alt-text="Screenshot of the Access Control I A M page." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-iam-screen.png":::
-
-2. Select **Add** > **Add role assignment**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png" alt-text="Screenshot that shows selections for adding a role assignment." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-add-role-assignment.png":::
-
- > [!NOTE]
-   > The **Add role assignment** capability is enabled only for users in the subscription who have the **Owner** role. Users who have other roles don't have permission to add role assignments.
-
-3. On the **Role** tab, select **Contributor** > **Next**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png" alt-text="Screenshot of the selections for choosing the contributor role." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-contributor-privileges.png":::
-
-4. On the **Members** tab, keep the default option of **User, group, or service principal** for **Assign access to**. Click **Select Members**, search for your Azure AD app, and then click **Select**.
-
- :::image type="content" source="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png" alt-text="Screenshot of the Members tab." lightbox="./media/how-to-setup-azure-ad-app-portal/azure-ad-review-and-assign.png":::
-
-
-## Next steps
-
-- [Single Server to Flexible Server migration concepts](./concepts-single-to-flexible.md)
-- [Migrate from Single Server to Flexible Server by using the Azure portal](./how-to-migrate-single-to-flexible-portal.md)
-- [Migrate from Single Server to Flexible Server by using the Azure CLI](./how-to-migrate-single-to-flexible-cli.md)
postgresql How To Optimize Query Stats Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-optimize-query-stats-collection.md
This article describes how to optimize query statistics collection on an Azure D
## Use pg_stat_statements
-**Pg_stat_statements** is a PostgreSQL extension that's enabled by default in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. This module hooks into every query execution and comes with a non-trivial performance cost. Enabling **pg_stat_statements** forces query text writes to files on disk.
+**Pg_stat_statements** is a PostgreSQL extension that can be enabled in Azure Database for PostgreSQL. The extension provides a means to track execution statistics for all SQL statements executed by a server. This module hooks into every query execution and comes with a non-trivial performance cost. Enabling **pg_stat_statements** forces query text to be written to files on disk.
If you have unique queries with long query text or you don't actively monitor **pg_stat_statements**, disable **pg_stat_statements** for best performance. To do so, change the setting to `pg_stat_statements.track = NONE`.
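
For example, the parameter can be changed from the Azure CLI; this is a sketch with placeholder resource names, and the same setting is also available under **Server parameters** in the Azure portal.

```bash
# Placeholder resource group and server names.
az postgres server configuration set \
  --resource-group myresourcegroup \
  --server-name mydemoserver \
  --name pg_stat_statements.track \
  --value NONE
```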
postgresql Quickstart Create Postgresql Server Database Using Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-postgresql-server-database-using-arm-template.md
read -p "Press [ENTER] to continue: "
For a step-by-step tutorial that guides you through the process of creating a template, see: > [!div class="nextstepaction"]
-> [ Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
+> [Tutorial: Create and deploy your first ARM template](../../azure-resource-manager/templates/template-tutorial-create-first-template.md)
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
Use any of the following deployment checklists during the setup or for troublesh
   3. Microsoft Graph User.Read
1. Review network configuration and validate if:
   1. A [private endpoint for Power BI tenant](/power-bi/enterprise/service-security-private-links) is deployed.
- 2. All required [private endpoints for Microsoft Purview](/catalog-private-link-end-to-end.md) are deployed.
+ 2. All required [private endpoints for Microsoft Purview](/azure/purview/catalog-private-link-end-to-end) are deployed.
   3. Network connectivity from Self-hosted runtime to Power BI tenant is enabled through private network.
   3. Network connectivity from Self-hosted runtime to Microsoft services is enabled through private network.
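
One quick way to validate the private-network path is to check name resolution from the machine that hosts the self-hosted integration runtime. This is a rough sketch with an illustrative Microsoft Purview account name; when the private endpoints and DNS are configured correctly, these names should resolve to private IP addresses rather than public ones.

```bash
# Run from the self-hosted runtime machine; the Purview account name is a placeholder.
nslookup mypurviewaccount.purview.azure.com
nslookup api.powerbi.com
```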
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
Previously updated : 05/20/2022 Last updated : 06/22/2022
The following table provides a brief description of each built-in role. Click th
> | [User Access Administrator](#user-access-administrator) | Lets you manage user access to Azure resources. | 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 | > | **Compute** | | | > | [Classic Virtual Machine Contributor](#classic-virtual-machine-contributor) | Lets you manage classic virtual machines, but not access to them, and not the virtual network or storage account they're connected to. | d73bb868-a0df-4d4d-bd69-98a00b01fccb |
+> | [Data Operator for Managed Disks](#data-operator-for-managed-disks) | Provides permissions to upload data to empty managed disks, read, or export data of managed disks (not attached to running VMs) and snapshots using SAS URIs and Azure AD authentication. | 959f8984-c045-4866-89c7-12bf9737be2e |
> | [Disk Backup Reader](#disk-backup-reader) | Provides permission to backup vault to perform disk backup. | 3e5e47e6-65f7-47ef-90b5-e5dd4d455f24 | > | [Disk Pool Operator](#disk-pool-operator) | Provide permission to StoragePool Resource Provider to manage disks added to a disk pool. | 60fc6e62-5479-42d4-8bf4-67625fcc2840 | > | [Disk Restore Operator](#disk-restore-operator) | Provides permission to backup vault to perform disk restore. | b50d9833-a0cb-478e-945f-707fcc997c13 |
Lets you manage classic virtual machines, but not access to them, and not the vi
} ```
+### Data Operator for Managed Disks
+
+Provides permissions to upload data to empty managed disks, read, or export data of managed disks (not attached to running VMs) and snapshots using SAS URIs and Azure AD authentication.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | *none* | |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/download/action | Perform read data operations on Disk SAS Uri |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/disks/upload/action | Perform write data operations on Disk SAS Uri |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/download/action | Perform read data operations on Snapshot SAS Uri |
+> | [Microsoft.Compute](resource-provider-operations.md#microsoftcompute)/snapshots/upload/action | Perform write data operations on Snapshot SAS Uri |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Provides permissions to upload data to empty managed disks, read, or export data of managed disks (not attached to running VMs) and snapshots using SAS URIs and Azure AD authentication.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/959f8984-c045-4866-89c7-12bf9737be2e",
+ "name": "959f8984-c045-4866-89c7-12bf9737be2e",
+ "permissions": [
+ {
+ "actions": [],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.Compute/disks/download/action",
+ "Microsoft.Compute/disks/upload/action",
+ "Microsoft.Compute/snapshots/download/action",
+ "Microsoft.Compute/snapshots/upload/action"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Data Operator for Managed Disks",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
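
As a usage sketch, the role can be assigned at the scope of a single disk and then exercised by requesting a SAS URI for that disk; the names and IDs below are placeholders.

```bash
# Assign the role on one managed disk (placeholder IDs and names).
az role assignment create \
  --assignee "<user-or-service-principal-object-id>" \
  --role "Data Operator for Managed Disks" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/<disk-name>"

# Request a read SAS URI for the (unattached) disk to export its data.
az disk grant-access \
  --resource-group <resource-group> \
  --name <disk-name> \
  --access-level Read \
  --duration-in-seconds 3600
```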
+ ### Disk Backup Reader Provides permission to backup vault to perform disk backup. [Learn more](../backup/disk-backup-faq.yml)
Allow read, write and delete access to Azure Spring Cloud Config Server [Learn m
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/read | Read the configuration content(for example, application.yaml) for a specific Azure Spring Cloud service instance |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/write | Write config server content for a specific Azure Spring Cloud service instance |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/delete | Delete config server content for a specific Azure Spring Cloud service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/read | Read the configuration content(for example, application.yaml) for a specific Azure Spring Apps service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/write | Write config server content for a specific Azure Spring Apps service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/delete | Delete config server content for a specific Azure Spring Apps service instance |
> | **NotDataActions** | | > | *none* | |
Allow read access to Azure Spring Cloud Config Server [Learn more](../spring-clo
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/read | Read the configuration content(for example, application.yaml) for a specific Azure Spring Cloud service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/configService/read | Read the configuration content(for example, application.yaml) for a specific Azure Spring Apps service instance |
> | **NotDataActions** | | > | *none* | |
Allow read, write and delete access to Azure Spring Cloud Service Registry [Lear
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/read | Read the user app(s) registration information for a specific Azure Spring Cloud service instance |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/write | Write the user app(s) registration information for a specific Azure Spring Cloud service instance |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/delete | Delete the user app registration information for a specific Azure Spring Cloud service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/read | Read the user app(s) registration information for a specific Azure Spring Apps service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/write | Write the user app(s) registration information for a specific Azure Spring Apps service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/delete | Delete the user app registration information for a specific Azure Spring Apps service instance |
> | **NotDataActions** | | > | *none* | |
Allow read access to Azure Spring Cloud Service Registry [Learn more](../spring-
> | **NotActions** | | > | *none* | | > | **DataActions** | |
-> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/read | Read the user app(s) registration information for a specific Azure Spring Cloud service instance |
+> | [Microsoft.AppPlatform](resource-provider-operations.md#microsoftappplatform)/Spring/eurekaService/read | Read the user app(s) registration information for a specific Azure Spring Apps service instance |
> | **NotDataActions** | | > | *none* | |
Allows send access to event grid events.
> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments | > | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/topics/read | Read a topic | > | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/domains/read | Read a domain |
-> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/partnerNamespaces/read | |
+> | [Microsoft.EventGrid](resource-provider-operations.md#microsofteventgrid)/partnerNamespaces/read | Read a partner namespace |
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | **NotActions** | | > | *none* | |
Can manage Azure AD Domain Services and related network configurations [Learn mo
> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident | > | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/register/action | Register Domain Service | > | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/unregister/action | Unregister Domain Service |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/read | Read Domain Services |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/write | Write Domain Service |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/delete | Delete Domain Service |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the Domain Service resource |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/read | Read Ou Containers |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/write | Write Ou Container |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/delete | Delete Ou Container |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/* | |
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/register/action | Registers the subscription | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/unregister/action | Unregisters the subscription | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition |
Can manage Azure AD Domain Services and related network configurations [Learn mo
"Microsoft.Insights/AlertRules/Incidents/Read", "Microsoft.AAD/register/action", "Microsoft.AAD/unregister/action",
- "Microsoft.AAD/domainServices/read",
- "Microsoft.AAD/domainServices/write",
- "Microsoft.AAD/domainServices/delete",
- "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read",
- "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/write",
- "Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read",
- "Microsoft.AAD/domainServices/oucontainer/read",
- "Microsoft.AAD/domainServices/oucontainer/write",
- "Microsoft.AAD/domainServices/oucontainer/delete",
+ "Microsoft.AAD/domainServices/*",
"Microsoft.Network/register/action", "Microsoft.Network/unregister/action", "Microsoft.Network/virtualNetworks/read",
Can view Azure AD Domain Services and related network configurations
> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Read | Read a classic metric alert | > | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/AlertRules/Incidents/Read | Read a classic metric alert incident |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/read | Read Domain Services |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/oucontainer/read | Read Ou Containers |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/OutboundNetworkDependenciesEndpoints/read | Get the network endpoints of all outbound dependencies |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service |
-> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
+> | [Microsoft.AAD](resource-provider-operations.md#microsoftaad)/domainServices/*/read | |
> | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/read | Get the virtual network definition | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/subnets/read | Gets a virtual network subnet definition | > | [Microsoft.Network](resource-provider-operations.md#microsoftnetwork)/virtualNetworks/virtualNetworkPeerings/read | Gets a virtual network peering definition |
Can view Azure AD Domain Services and related network configurations
"Microsoft.Resources/subscriptions/resourceGroups/read", "Microsoft.Insights/AlertRules/Read", "Microsoft.Insights/AlertRules/Incidents/Read",
- "Microsoft.AAD/domainServices/read",
- "Microsoft.AAD/domainServices/oucontainer/read",
- "Microsoft.AAD/domainServices/OutboundNetworkDependenciesEndpoints/read",
- "Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read",
- "Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read",
+ "Microsoft.AAD/domainServices/*/read",
"Microsoft.Network/virtualNetworks/read", "Microsoft.Network/virtualNetworks/subnets/read", "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
Lets you push assessments to Microsoft Defender for Cloud
"assignableScopes": [ "/" ],
- "description": "Lets you push assessments to Microsoft Defender for Cloud",
+ "description": "Lets you push assessments to Security Center",
"id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/612c2aa1-cb24-443b-ac28-3ab7272de6f5", "name": "612c2aa1-cb24-443b-ac28-3ab7272de6f5", "permissions": [
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 05/20/2022 Last updated : 06/22/2022
Azure service: core
> | Microsoft.Marketplace/privateStores/adminRequestApprovals/write | Admin update the request with decision on the request | > | Microsoft.Marketplace/privateStores/collections/approveAllItems/action | Delete all specific approved items and set collection to allItemsApproved | > | Microsoft.Marketplace/privateStores/collections/disableApproveAllItems/action | Set approve all items property to false for the collection |
+> | Microsoft.Marketplace/privateStores/collections/upsertOfferWithMultiContext/action | Upsert an offer with different contexts |
> | Microsoft.Marketplace/privateStores/offers/write | Creates offer in PrivateStore. | > | Microsoft.Marketplace/privateStores/offers/delete | Deletes offer from PrivateStore. | > | Microsoft.Marketplace/privateStores/offers/read | Reads PrivateStore offers. |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/locations/commitInternalAzureNetworkManagerConfiguration/action | Commits Internal AzureNetworkManager Configuration In ANM | > | Microsoft.Network/locations/internalAzureVirtualNetworkManagerOperation/action | Internal AzureVirtualNetworkManager Operation In ANM | > | Microsoft.Network/locations/setLoadBalancerFrontendPublicIpAddresses/action | SetLoadBalancerFrontendPublicIpAddresses targets frontend IP configurations of 2 load balancers. Azure Resource Manager IDs of the IP configurations are provided in the body of the request. |
+> | Microsoft.Network/locations/applicationgatewaywafdynamicmanifest/read | Get the application gateway waf dynamic manifest |
> | Microsoft.Network/locations/autoApprovedPrivateLinkServices/read | Gets Auto Approved Private Link Services | > | Microsoft.Network/locations/availableDelegations/read | Gets Available Delegations | > | Microsoft.Network/locations/availablePrivateEndpointTypes/read | Gets available Private Endpoint resources |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/networkWatchers/packetCaptures/read | Get the packet capture definition | > | Microsoft.Network/networkWatchers/packetCaptures/write | Creates a packet capture | > | Microsoft.Network/networkWatchers/packetCaptures/delete | Deletes a packet capture |
+> | Microsoft.Network/networkWatchers/packetCaptures/queryStatus/read | Read Packet Capture Status |
> | Microsoft.Network/networkWatchers/pingMeshes/start/action | Start PingMesh between specified VMs | > | Microsoft.Network/networkWatchers/pingMeshes/stop/action | Stop PingMesh between specified VMs | > | Microsoft.Network/networkWatchers/pingMeshes/read | Get PingMesh details |
Azure service: [Application Gateway](../application-gateway/index.yml), [Azure B
> | Microsoft.Network/virtualNetworks/BastionHosts/action | Gets Bastion Host references in a Virtual Network. | > | Microsoft.Network/virtualNetworks/listNetworkManagerEffectiveConnectivityConfigurations/action | List Network Manager Effective Connectivity Configurations | > | Microsoft.Network/virtualNetworks/listNetworkManagerEffectiveSecurityAdminRules/action | List Network Manager Effective Security Admin Rules |
+> | Microsoft.Network/virtualNetworks/listDnsResolvers/action | Gets the DNS Resolver for Virtual Network, in JSON format |
+> | Microsoft.Network/virtualNetworks/listDnsForwardingRulesets/action | Gets the DNS Forwarding Ruleset for Virtual Network, in JSON format |
> | Microsoft.Network/virtualNetworks/bastionHosts/default/action | Gets Bastion Host references in a Virtual Network. | > | Microsoft.Network/virtualNetworks/checkIpAddressAvailability/read | Check if Ip Address is available at the specified virtual network | > | Microsoft.Network/virtualNetworks/customViews/read | Get definition of a custom view of Virtual Network | > | Microsoft.Network/virtualNetworks/customViews/get/action | Get a Virtual Network custom view content |
-> | Microsoft.Network/virtualNetworks/listDnsForwardingRulesets/read | Gets the DNS Forwarding Ruleset for Virtual Network, in JSON format |
-> | Microsoft.Network/virtualNetworks/listDnsResolvers/read | Gets the DNS Resolver for Virtual Network, in JSON format |
> | Microsoft.Network/virtualNetworks/privateDnsZoneLinks/read | Get the Private DNS zone link to a virtual network properties, in JSON format. | > | Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings of Virtual Network | > | Microsoft.Network/virtualNetworks/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings of the Virtual Network |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/backupPolicies/read | Reads a backup policy resource. | > | Microsoft.NetApp/netAppAccounts/backupPolicies/write | Writes a backup policy resource. | > | Microsoft.NetApp/netAppAccounts/backupPolicies/delete | Deletes a backup policy resource. |
+> | Microsoft.NetApp/netAppAccounts/backupVaults/read | Reads a Backup Vault resource. |
+> | Microsoft.NetApp/netAppAccounts/backupVaults/write | Writes a Backup Vault resource. |
> | Microsoft.NetApp/netAppAccounts/capacityPools/read | Reads a pool resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/write | Writes a pool resource. | > | Microsoft.NetApp/netAppAccounts/capacityPools/delete | Deletes a pool resource. |
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/netAppAccounts/volumeGroups/read | Reads a volume group resource. | > | Microsoft.NetApp/netAppAccounts/volumeGroups/write | Writes a volume group resource. | > | Microsoft.NetApp/netAppAccounts/volumeGroups/delete | Deletes a volume group resource. |
-> | Microsoft.NetApp/netAppIPSecPolicies/read | |
-> | Microsoft.NetApp/netAppIPSecPolicies/write | |
-> | Microsoft.NetApp/netAppIPSecPolicies/delete | |
+> | Microsoft.NetApp/netAppIPSecPolicies/read | Reads an IPSec policy resource. |
+> | Microsoft.NetApp/netAppIPSecPolicies/write | Writes an IPSec policy resource. |
+> | Microsoft.NetApp/netAppIPSecPolicies/delete | Deletes an IPSec policy resource. |
> | Microsoft.NetApp/netAppIPSecPolicies/Apply/action | | > | Microsoft.NetApp/Operations/read | Reads an operation resources. |
Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/updateInternalProperties/action | | > | Microsoft.Storage/storageAccounts/hnsonmigration/action | Customer is able to abort an ongoing Hns migration on the storage account | > | Microsoft.Storage/storageAccounts/hnsonmigration/action | Customer is able to migrate to hns account type |
+> | Microsoft.Storage/storageAccounts/privateEndpointConnections/action | |
> | Microsoft.Storage/storageAccounts/restoreBlobRanges/action | Restore blob ranges to the state of the specified time | > | Microsoft.Storage/storageAccounts/PrivateEndpointConnectionsApproval/action | Approve Private Endpoint Connections | > | Microsoft.Storage/storageAccounts/failover/action | Customer is able to control the failover in case of availability issues |
Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/managementPolicies/delete | Delete storage account management policies | > | Microsoft.Storage/storageAccounts/managementPolicies/read | Get storage management account policies | > | Microsoft.Storage/storageAccounts/managementPolicies/write | Put storage account management policies |
+> | Microsoft.Storage/storageAccounts/networkSecurityPerimeterAssociationProxies/delete | |
+> | Microsoft.Storage/storageAccounts/networkSecurityPerimeterAssociationProxies/read | |
+> | Microsoft.Storage/storageAccounts/networkSecurityPerimeterAssociationProxies/write | |
> | Microsoft.Storage/storageAccounts/objectReplicationPolicies/delete | Delete object replication policy | > | Microsoft.Storage/storageAccounts/objectReplicationPolicies/read | Get object replication policy | > | Microsoft.Storage/storageAccounts/objectReplicationPolicies/read | List object replication policies |
Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/restorePoints/delete | | > | Microsoft.Storage/storageAccounts/restorePoints/read | | > | Microsoft.Storage/storageAccounts/services/diagnosticSettings/write | Create/Update storage account diagnostic settings. |
+> | Microsoft.Storage/storageAccounts/storageTasks/delete | |
+> | Microsoft.Storage/storageAccounts/storageTasks/read | |
+> | Microsoft.Storage/storageAccounts/storageTasks/executionsummary/action | |
+> | Microsoft.Storage/storageAccounts/storageTasks/assignmentexecutionsummary/action | |
+> | Microsoft.Storage/storageAccounts/storageTasks/write | |
> | Microsoft.Storage/storageAccounts/tableServices/read | | > | Microsoft.Storage/storageAccounts/tableServices/read | Get Table service properties | > | Microsoft.Storage/storageAccounts/tableServices/write | |
Azure service: [Storage](../storage/index.yml)
> | Microsoft.Storage/storageAccounts/tableServices/tables/read | Query tables | > | Microsoft.Storage/storageAccounts/tableServices/tables/write | Create tables | > | Microsoft.Storage/storageAccounts/tableServices/tables/delete | Delete tables |
+> | Microsoft.Storage/storageTasks/read | |
+> | Microsoft.Storage/storageTasks/delete | |
+> | Microsoft.Storage/storageTasks/promote/action | |
+> | Microsoft.Storage/storageTasks/write | |
> | Microsoft.Storage/usages/read | Returns the limit and the current usage count for resources in the specified subscription | > | **DataAction** | **Description** | > | Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read | Returns a blob or a list of blobs |
Azure service: [Azure Spring Cloud](../spring-cloud/index.yml)
> | Microsoft.AppPlatform/locations/checkNameAvailability/action | Check resource name availability | > | Microsoft.AppPlatform/locations/operationResults/Spring/read | Read resource operation result | > | Microsoft.AppPlatform/locations/operationStatus/operationId/read | Read resource operation status |
-> | Microsoft.AppPlatform/operations/read | List available operations of Microsoft Azure Spring Cloud |
-> | Microsoft.AppPlatform/skus/read | List available skus of Microsoft Azure Spring Cloud |
-> | Microsoft.AppPlatform/Spring/write | Create or Update a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/delete | Delete a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/read | Get Azure Spring Cloud service instance(s) |
-> | Microsoft.AppPlatform/Spring/listTestKeys/action | List test keys for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/regenerateTestKey/action | Regenerate test key for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/disableTestEndpoint/action | Disable test endpoint functionality for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/enableTestEndpoint/action | Enable test endpoint functionality for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/stop/action | Stop a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/start/action | Start a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configServers/action | Validate the config server settings for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/read | Get the API portal for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/write | Create or update the API portal for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/delete | Delete the API portal for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/validateDomain/action | Validate the API portal domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/domains/read | Get the API portal domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/domains/write | Create or update the API portal domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apiPortals/domains/delete | Delete the API portal domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apps/write | Create or update the application for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apps/delete | Delete the application for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apps/read | Get the applications for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action | Get the resource upload URL of a specific Microsoft Azure Spring Cloud application |
+> | Microsoft.AppPlatform/operations/read | List available operations of Microsoft Azure Spring Apps |
+> | Microsoft.AppPlatform/skus/read | List available skus of Microsoft Azure Spring Apps |
+> | Microsoft.AppPlatform/Spring/write | Create or Update a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/delete | Delete a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/read | Get Azure Spring Apps service instance(s) |
+> | Microsoft.AppPlatform/Spring/listTestKeys/action | List test keys for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/regenerateTestKey/action | Regenerate test key for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/disableTestEndpoint/action | Disable test endpoint functionality for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/enableTestEndpoint/action | Enable test endpoint functionality for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/stop/action | Stop a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/start/action | Start a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configServers/action | Validate the config server settings for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/read | Get the API portal for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/write | Create or update the API portal for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/delete | Delete the API portal for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/validateDomain/action | Validate the API portal domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/read | Get the API portal domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/write | Create or update the API portal domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apiPortals/domains/delete | Delete the API portal domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apps/write | Create or update the application for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apps/delete | Delete the application for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apps/read | Get the applications for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/apps/getResourceUploadUrl/action | Get the resource upload URL of a specific Microsoft Azure Spring Apps application |
> | Microsoft.AppPlatform/Spring/apps/validateDomain/action | Validate the custom domain for a specific application |
-> | Microsoft.AppPlatform/Spring/apps/setActiveDeployments/action | Set active deployments for a specific Microsoft Azure Spring Cloud application |
+> | Microsoft.AppPlatform/Spring/apps/setActiveDeployments/action | Set active deployments for a specific Microsoft Azure Spring Apps application |
> | Microsoft.AppPlatform/Spring/apps/bindings/write | Create or update the binding for a specific application | > | Microsoft.AppPlatform/Spring/apps/bindings/delete | Delete the binding for a specific application | > | Microsoft.AppPlatform/Spring/apps/bindings/read | Get the bindings for a specific application |
Azure service: [Azure Spring Cloud](../spring-cloud/index.yml)
> | Microsoft.AppPlatform/Spring/apps/deployments/start/action | Start the deployment for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/stop/action | Stop the deployment for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/restart/action | Restart the deployment for a specific application |
-> | Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action | Get the log file URL of a specific Microsoft Azure Spring Cloud application deployment |
+> | Microsoft.AppPlatform/Spring/apps/deployments/getLogFileUrl/action | Get the log file URL of a specific Microsoft Azure Spring Apps application deployment |
> | Microsoft.AppPlatform/Spring/apps/deployments/generateHeapDump/action | Generate heap dump for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/generateThreadDump/action | Generate thread dump for a specific application | > | Microsoft.AppPlatform/Spring/apps/deployments/startJFR/action | Start JFR for a specific application |
Azure service: [Azure Spring Cloud](../spring-cloud/index.yml)
> | Microsoft.AppPlatform/Spring/apps/domains/write | Create or update the custom domain for a specific application | > | Microsoft.AppPlatform/Spring/apps/domains/delete | Delete the custom domain for a specific application | > | Microsoft.AppPlatform/Spring/apps/domains/read | Get the custom domains for a specific application |
-> | Microsoft.AppPlatform/Spring/buildServices/read | Get the Build Services for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action | Get the Upload URL of a specific Microsoft Azure Spring Cloud build |
-> | Microsoft.AppPlatform/Spring/buildServices/agentPools/read | Get the Agent Pools for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/agentPools/write | Create or update the Agent Pools for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builders/read | Get the Builders for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builders/write | Create or update the Builders for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builders/delete | Delete the Builders for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read | Get the BuildpackBinding for a specific Azure Spring Cloud service instance Builder |
-> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write | Create or update the BuildpackBinding for a specific Azure Spring Cloud service instance Builder |
-> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete | Delete the BuildpackBinding for a specific Azure Spring Cloud service instance Builder |
-> | Microsoft.AppPlatform/Spring/buildServices/builds/read | Get the Builds for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builds/write | Create or update the Builds for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builds/results/read | Get the Build Results for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action | Get the Log File URL of a specific Microsoft Azure Spring Cloud build result |
-> | Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read | Get the Supported Buildpacks for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/buildServices/supportedStacks/read | Get the Supported Stacks for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/certificates/write | Create or update the certificate for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/certificates/delete | Delete the certificate for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/certificates/read | Get the certificates for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configServers/read | Get the config server for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configServers/write | Create or update the config server for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configurationServices/read | Get the Application Configuration Services for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configurationServices/write | Create or update the Application Configuration Service for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configurationServices/delete | Delete the Application Configuration Service for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/read | Get the Build Services for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/getResourceUploadUrl/action | Get the Upload URL of a specific Microsoft Azure Spring Apps build |
+> | Microsoft.AppPlatform/Spring/buildServices/agentPools/read | Get the Agent Pools for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/agentPools/write | Create or update the Agent Pools for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/read | Get the Builders for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/write | Create or update the Builders for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/delete | Delete the Builders for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/read | Get the BuildpackBinding for a specific Azure Spring Apps service instance Builder |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/write | Create or update the BuildpackBinding for a specific Azure Spring Apps service instance Builder |
+> | Microsoft.AppPlatform/Spring/buildServices/builders/buildpackBindings/delete | Delete the BuildpackBinding for a specific Azure Spring Apps service instance Builder |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/read | Get the Builds for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/write | Create or update the Builds for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/results/read | Get the Build Results for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/builds/results/getLogFileUrl/action | Get the Log File URL of a specific Microsoft Azure Spring Apps build result |
+> | Microsoft.AppPlatform/Spring/buildServices/supportedBuildpacks/read | Get the Supported Buildpacks for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/buildServices/supportedStacks/read | Get the Supported Stacks for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/certificates/write | Create or update the certificate for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/certificates/delete | Delete the certificate for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/certificates/read | Get the certificates for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configServers/read | Get the config server for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configServers/write | Create or update the config server for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/read | Get the Application Configuration Services for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/write | Create or update the Application Configuration Service for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configurationServices/delete | Delete the Application Configuration Service for a specific Azure Spring Apps service instance |
> | Microsoft.AppPlatform/Spring/configurationServices/validate/action | Validate the settings for a specific Application Configuration Service |
-> | Microsoft.AppPlatform/Spring/deployments/read | Get the deployments for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/detectors/read | Get the detectors for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/read | Get the Spring Cloud Gateways for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/write | Create or update the Spring Cloud Gateway for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/delete | Delete the Spring Cloud Gateway for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/validateDomain/action | Validate the Spring Cloud Gateway domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/domains/read | Get the Spring Cloud Gateways domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/domains/write | Create or update the Spring Cloud Gateway domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/domains/delete | Delete the Spring Cloud Gateway domain for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/read | Get the Spring Cloud Gateway route config for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/write | Create or update the Spring Cloud Gateway route config for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/delete | Delete the Spring Cloud Gateway route config for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/monitoringSettings/read | Get the monitoring setting for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/monitoringSettings/write | Create or update the monitoring setting for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/logDefinitions/read | Get definitions of logs from Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/metricDefinitions/read | Get definitions of metrics from Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/serviceRegistries/read | Get the Service Registrys for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/serviceRegistries/write | Create or update the Service Registry for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/serviceRegistries/delete | Delete the Service Registry for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/storages/write | Create or update the storage for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/storages/delete | Delete the storage for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/storages/read | Get storage for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/deployments/read | Get the deployments for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/detectors/read | Get the detectors for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/read | Get the Spring Cloud Gateways for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/write | Create or update the Spring Cloud Gateway for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/delete | Delete the Spring Cloud Gateway for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/validateDomain/action | Validate the Spring Cloud Gateway domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/domains/read | Get the Spring Cloud Gateways domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/domains/write | Create or update the Spring Cloud Gateway domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/domains/delete | Delete the Spring Cloud Gateway domain for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/read | Get the Spring Cloud Gateway route config for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/write | Create or update the Spring Cloud Gateway route config for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/gateways/routeConfigs/delete | Delete the Spring Cloud Gateway route config for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/monitoringSettings/read | Get the monitoring setting for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/monitoringSettings/write | Create or update the monitoring setting for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/read | Get the diagnostic settings for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/diagnosticSettings/write | Create or update the diagnostic settings for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/logDefinitions/read | Get definitions of logs from Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/providers/Microsoft.Insights/metricDefinitions/read | Get definitions of metrics from Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/serviceRegistries/read | Get the Service Registrys for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/serviceRegistries/write | Create or update the Service Registry for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/serviceRegistries/delete | Delete the Service Registry for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/storages/write | Create or update the storage for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/storages/delete | Delete the storage for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/storages/read | Get storage for a specific Azure Spring Apps service instance |
> | **DataAction** | **Description** |
-> | Microsoft.AppPlatform/Spring/configService/read | Read the configuration content(for example, application.yaml) for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configService/write | Write config server content for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/configService/delete | Delete config server content for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/eurekaService/read | Read the user app(s) registration information for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/eurekaService/write | Write the user app(s) registration information for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/eurekaService/delete | Delete the user app registration information for a specific Azure Spring Cloud service instance |
-> | Microsoft.AppPlatform/Spring/logstreamService/read | Read the streaming log of user app for a specific Azure Spring Cloud service instance |
+> | Microsoft.AppPlatform/Spring/configService/read | Read the configuration content (for example, application.yaml) for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configService/write | Write config server content for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/configService/delete | Delete config server content for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/eurekaService/read | Read the user app(s) registration information for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/eurekaService/write | Write the user app(s) registration information for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/eurekaService/delete | Delete the user app registration information for a specific Azure Spring Apps service instance |
+> | Microsoft.AppPlatform/Spring/logstreamService/read | Read the streaming log of user app for a specific Azure Spring Apps service instance |
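The control-plane operations (Actions) and data-plane operations (DataActions) listed above can be combined in an Azure custom role. The following is a minimal sketch, assuming the Azure CLI is installed and signed in; the role name, file name, and subscription ID are placeholders, not values from this change.

```bash
# Hypothetical custom role granting read-only access to Azure Spring Apps
# config server content, app registration info, and app log streams.
# Replace the placeholder subscription ID with your own scope before running.
cat > spring-apps-data-reader.json <<'EOF'
{
  "Name": "Spring Apps Data Reader (example)",
  "IsCustom": true,
  "Description": "Read config server content, app registrations, and log streams.",
  "Actions": [
    "Microsoft.AppPlatform/Spring/read"
  ],
  "NotActions": [],
  "DataActions": [
    "Microsoft.AppPlatform/Spring/configService/read",
    "Microsoft.AppPlatform/Spring/eurekaService/read",
    "Microsoft.AppPlatform/Spring/logstreamService/read"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
EOF
# Register the custom role definition in the subscription.
az role definition create --role-definition @spring-apps-data-reader.json
```

You can then assign the role at a narrower scope, such as a single service instance, with `az role assignment create`.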
### Microsoft.CertificateRegistration
Azure service: [Media Services](/media-services/)
> | Microsoft.Media/unregister/action | Unregisters the subscription for the Media Services resource provider | > | Microsoft.Media/checknameavailability/action | Checks if a Media Services account name is available | > | Microsoft.Media/locations/checkNameAvailability/action | Checks if a Media Services account name is available |
-> | Microsoft.Media/locations/mediaServiceOperationResults/read | Read any Media Services Operation Result |
-> | Microsoft.Media/locations/mediaserviceOperationStatuses/read | Read Any Media Service Operation Status |
+> | Microsoft.Media/locations/mediaServicesOperationResults/read | Read any Media Services Operation Result |
+> | Microsoft.Media/locations/mediaServicesOperationStatuses/read | Read Any Media Service Operation Status |
> | Microsoft.Media/locations/videoAnalyzerOperationResults/read | Read any Video Analyzer Operation Result | > | Microsoft.Media/locations/videoAnalyzerOperationStatuses/read | Read any Video Analyzer Operation Status | > | Microsoft.Media/mediaservices/read | Read any Media Services Account |
Azure service: [Container Instances](../container-instances/index.yml)
> | Microsoft.ContainerInstance/containerGroupProfiles/read | Get all container group profiles. | > | Microsoft.ContainerInstance/containerGroupProfiles/write | Create or update a specific container group profile. | > | Microsoft.ContainerInstance/containerGroupProfiles/delete | Delete the specific container group profile. |
+> | Microsoft.ContainerInstance/containerGroupProfiles/revisions/read | Get container group profile revisions |
+> | Microsoft.ContainerInstance/containerGroupProfiles/revisions/deregister/action | Deregister container group profile revision. |
> | Microsoft.ContainerInstance/containerGroups/read | Get all container groups. | > | Microsoft.ContainerInstance/containerGroups/write | Create or update a specific container group. | > | Microsoft.ContainerInstance/containerGroups/delete | Delete the specific container group. |
Azure service: [Azure Kubernetes Service (AKS)](../aks/index.yml)
> | Microsoft.ContainerService/managedClusters/listClusterUserCredential/action | List the clusterUser credential of a managed cluster | > | Microsoft.ContainerService/managedClusters/listClusterMonitoringUserCredential/action | List the clusterMonitoringUser credential of a managed cluster | > | Microsoft.ContainerService/managedClusters/resetServicePrincipalProfile/action | Reset the service principal profile of a managed cluster |
+> | Microsoft.ContainerService/managedClusters/unpinManagedCluster/action | Reset the service principal profile of a managed cluster |
> | Microsoft.ContainerService/managedClusters/resolvePrivateLinkServiceId/action | Resolve the private link service id of a managed cluster | > | Microsoft.ContainerService/managedClusters/resetAADProfile/action | Reset the AAD profile of a managed cluster | > | Microsoft.ContainerService/managedClusters/rotateClusterCertificates/action | Rotate certificates of a managed cluster |
Azure service: [Data Factory](../data-factory/index.yml)
> | Microsoft.DataFactory/factories/dataflows/read | Reads Data Flow. | > | Microsoft.DataFactory/factories/dataflows/delete | Deletes Data Flow. | > | Microsoft.DataFactory/factories/dataflows/write | Create or update Data Flow |
+> | Microsoft.DataFactory/factories/dataMappers/read | Reads Data Mapping. |
+> | Microsoft.DataFactory/factories/dataMappers/delete | Deletes Data Mapping. |
+> | Microsoft.DataFactory/factories/dataMappers/write | Create or update Data Mapping |
> | Microsoft.DataFactory/factories/datasets/read | Reads any Dataset. | > | Microsoft.DataFactory/factories/datasets/delete | Deletes any Dataset. | > | Microsoft.DataFactory/factories/datasets/write | Creates or Updates any Dataset. |
Azure service: [Azure Database for PostgreSQL](../postgresql/index.yml)
> | Microsoft.DBforPostgreSQL/flexibleServers/providers/Microsoft.Insights/metricDefinitions/read | Return types of metrics that are available for databases | > | Microsoft.DBforPostgreSQL/flexibleServers/queryStatistics/read | | > | Microsoft.DBforPostgreSQL/flexibleServers/queryTexts/read | |
+> | Microsoft.DBforPostgreSQL/flexibleServers/replicas/read | |
> | Microsoft.DBforPostgreSQL/flexibleServers/topQueryStatistics/read | | > | Microsoft.DBforPostgreSQL/locations/administratorAzureAsyncOperation/read | Gets in-progress operations on PostgreSQL server administrators | > | Microsoft.DBforPostgreSQL/locations/administratorOperationResults/read | Return PostgreSQL Server administrator operation results |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | | | > | Microsoft.Sql/checkNameAvailability/action | Verify whether given server name is available for provisioning worldwide for a given subscription. | > | Microsoft.Sql/register/action | Registers the subscription for the Microsoft SQL Database resource provider and enables the creation of Microsoft SQL Databases. |
-> | Microsoft.Sql/unregister/action | UnRegisters the subscription for the Microsoft SQL Database resource provider and disables the creation of Microsoft SQL Databases. |
+> | Microsoft.Sql/unregister/action | Unregisters the subscription for the Azure SQL Database resource provider and disables the creation of Azure SQL Databases. |
> | Microsoft.Sql/privateEndpointConnectionsApproval/action | Determines if user is allowed to approve a private endpoint connection | > | Microsoft.Sql/instancePools/read | Gets an instance pool | > | Microsoft.Sql/instancePools/write | Creates or updates an instance pool |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/managedDatabaseRestoreAzureAsyncOperation/completeRestore/action | Completes managed database restore operation | > | Microsoft.Sql/locations/managedInstanceAdvancedThreatProtectionAzureAsyncOperation/read | Retrieve results of the managed instance Advanced Threat Protection settings write operation | > | Microsoft.Sql/locations/managedInstanceAdvancedThreatProtectionOperationResults/read | Retrieve results of the managed instance Advanced Threat Protection settings write operation |
+> | Microsoft.Sql/locations/managedInstanceDtcAzureAsyncOperation/read | Gets the status of Azure SQL Managed Instance DTC Azure async operation. |
> | Microsoft.Sql/locations/managedInstanceEncryptionProtectorAzureAsyncOperation/read | Gets in-progress operations on transparent data encryption managed instance encryption protector | > | Microsoft.Sql/locations/managedInstanceEncryptionProtectorOperationResults/read | Gets in-progress operations on transparent data encryption managed instance encryption protector | > | Microsoft.Sql/locations/managedInstanceKeyAzureAsyncOperation/read | Gets in-progress operations on transparent data encryption managed instance keys |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/networkSecurityPerimeterAssociationProxyAzureAsyncOperation/read | Get network security perimeter proxy azure async operation | > | Microsoft.Sql/locations/networkSecurityPerimeterAssociationProxyOperationResults/read | Get network security perimeter operation result | > | Microsoft.Sql/locations/networkSecurityPerimeterUpdatesAvailableAzureAsyncOperation/read | Get network security perimeter updates available azure async operation |
-> | Microsoft.Sql/locations/operationsHealth/read | Gets health status of the service operation in a location |
> | Microsoft.Sql/locations/privateEndpointConnectionAzureAsyncOperation/read | Gets the result for a private endpoint connection operation | > | Microsoft.Sql/locations/privateEndpointConnectionOperationResults/read | Gets the result for a private endpoint connection operation | > | Microsoft.Sql/locations/privateEndpointConnectionProxyAzureAsyncOperation/read | Gets the result for a private endpoint connection proxy operation |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/locations/serverTrustGroups/delete | Deletes the existing SQL Server Trust Group | > | Microsoft.Sql/locations/serverTrustGroups/read | Returns the existing SQL Server Trust Groups | > | Microsoft.Sql/locations/shortTermRetentionPolicyOperationResults/read | Gets the status of a short term retention policy operation |
+> | Microsoft.Sql/locations/sqlVulnerabilityAssessmentAzureAsyncOperation/read | Get a sql database vulnerability assessment scan azure async operation. |
+> | Microsoft.Sql/locations/sqlVulnerabilityAssessmentOperationResults/read | Get a sql database vulnerability assessment scan operation results. |
> | Microsoft.Sql/locations/startManagedInstanceAzureAsyncOperation/read | Gets Azure SQL Managed Instance Start Azure async operation. | > | Microsoft.Sql/locations/startManagedInstanceOperationResults/read | Gets Azure SQL Managed Instance Start operation result. | > | Microsoft.Sql/locations/stopManagedInstanceAzureAsyncOperation/read | Gets Azure SQL Managed Instance Stop Azure async operation. |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/managedInstances/dnsAliases/write | Creates an Azure SQL Managed Instance Dns Alias with the specified parameters or updates the properties for the specified Azure SQL Managed Instance Dns Alias. | > | Microsoft.Sql/managedInstances/dnsAliases/delete | Deletes an existing Azure SQL Managed Instance Dns Alias. | > | Microsoft.Sql/managedInstances/dnsAliases/acquire/action | Acquire Azure SQL Managed Instance Dns Alias from another Managed Instance. |
+> | Microsoft.Sql/managedInstances/dtc/read | Gets properties for the specified Azure SQL Managed Instance DTC. |
+> | Microsoft.Sql/managedInstances/dtc/write | Updates Azure SQL Managed Instance's DTC properties for the specified instance. |
> | Microsoft.Sql/managedInstances/encryptionProtector/revalidate/action | Update the properties for the specified Server Encryption Protector. | > | Microsoft.Sql/managedInstances/encryptionProtector/read | Returns a list of server encryption protectors or gets the properties for the specified server encryption protector. | > | Microsoft.Sql/managedInstances/encryptionProtector/write | Update the properties for the specified Server Encryption Protector. |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/servers/databases/dataMaskingPolicies/write | Change data masking policy for a given database | > | Microsoft.Sql/servers/databases/dataMaskingPolicies/rules/read | Retrieve details of the data masking policy rule configured on a given database | > | Microsoft.Sql/servers/databases/dataMaskingPolicies/rules/write | Change data masking policy rule for a given database |
-> | Microsoft.Sql/servers/databases/dataMaskingPolicies/rules/delete | Delete data masking policy rule for a given database |
> | Microsoft.Sql/servers/databases/dataWarehouseQueries/read | Returns the data warehouse distribution query information for selected query ID | > | Microsoft.Sql/servers/databases/dataWarehouseQueries/dataWarehouseQuerySteps/read | Returns the distributed query step information of data warehouse query for selected step ID | > | Microsoft.Sql/servers/databases/dataWarehouseUserActivities/read | Retrieves the user activities of a SQL Data Warehouse instance which includes running and suspended queries |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/servers/databases/sensitivityLabels/read | List sensitivity labels of a given database | > | Microsoft.Sql/servers/databases/serviceTierAdvisors/read | Return suggestion about scaling database up or down based on query execution statistics to improve performance or reduce cost | > | Microsoft.Sql/servers/databases/skus/read | Gets a collection of skus available for a database |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/read | Retrieve SQL Vulnerability Assessment policies on a given database |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/initiateScan/action | Execute vulnerability assessment database scan. |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/write | Change the sql vulnerability assessment baseline set for a given database |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/read | List the Sql Vulnerability Assessment baseline set by Sql Vulnerability Assessments |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/rules/delete | Remove the sql vulnerability assessment rule baseline for a given database |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/rules/write | Change the sql vulnerability assessment rule baseline for a given database |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/baselines/rules/read | Get the sql vulnerability assessment rule baseline list for a given database |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/scans/read | Retrieve the scan record of the database SQL vulnerability assessment scan |
+> | Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/scans/scanResults/read | Retrieve the scan results of the database SQL vulnerability assessment scan |
> | Microsoft.Sql/servers/databases/syncGroups/refreshHubSchema/action | Refresh sync hub database schema | > | Microsoft.Sql/servers/databases/syncGroups/cancelSync/action | Cancel sync group synchronization | > | Microsoft.Sql/servers/databases/syncGroups/triggerSync/action | Trigger sync group synchronization |
Azure service: [Azure SQL Database](/azure/azure-sql/database/index), [Azure SQL
> | Microsoft.Sql/servers/providers/Microsoft.Insights/metricDefinitions/read | Return types of metrics that are available for servers | > | Microsoft.Sql/servers/recommendedElasticPools/read | Retrieve recommendation for elastic database pools to reduce cost or improve performance based on historical resource utilization | > | Microsoft.Sql/servers/recommendedElasticPools/databases/read | Retrieve metrics for recommended elastic database pools for a given server |
-> | Microsoft.Sql/servers/recoverableDatabases/read | This operation is used for disaster recovery of live database to restore database to last-known good backup point. It returns information about the last good backup but it doesn\u0027t actually restore the database. |
+> | Microsoft.Sql/servers/recoverableDatabases/read | Return the list of recoverable databases or gets the properties for the specified recoverable database. |
> | Microsoft.Sql/servers/replicationLinks/read | Return the list of replication links or gets the properties for the specified replication links. |
> | Microsoft.Sql/servers/restorableDroppedDatabases/read | Get a list of databases that were dropped on a given server that are still within retention policy. |
> | Microsoft.Sql/servers/securityAlertPolicies/write | Change the server threat detection policy for a given server |
> | Microsoft.Sql/servers/securityAlertPolicies/read | Retrieve a list of server threat detection policies configured for a given server |
> | Microsoft.Sql/servers/securityAlertPolicies/operationResults/read | Retrieve results of the server threat detection policy write operation |
> | Microsoft.Sql/servers/serviceObjectives/read | Retrieve list of service level objectives (also known as performance tiers) available on a given server |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/write | Change SQL Vulnerability Assessment for a given server |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/delete | Remove SQL Vulnerability Assessment for a given server |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/read | Retrieve SQL Vulnerability Assessment policies on a given server |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/initiateScan/action | Execute vulnerability assessment database scan. |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/baselines/write | Change the sql vulnerability assessment baseline set for a given system database |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/baselines/read | Retrieve the Sql Vulnerability Assessment baseline set on a system database |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/baselines/rules/read | Get the vulnerability assessment rule baseline for a given database |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/baselines/rules/delete | Remove the sql vulnerability assessment rule baseline for a given database |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/baselines/rules/write | Change the sql vulnerability assessment rule baseline for a given database |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/scans/read | List SQL vulnerability assessment scan records by database. |
+> | Microsoft.Sql/servers/sqlVulnerabilityAssessments/scans/scanResults/read | Retrieve the scan results of the database vulnerability assessment scan |
> | Microsoft.Sql/servers/syncAgents/read | Return the list of sync agents or gets the properties for the specified sync agent. |
> | Microsoft.Sql/servers/syncAgents/write | Creates a sync agent with the specified parameters or update the properties for the specified sync agent. |
> | Microsoft.Sql/servers/syncAgents/delete | Deletes an existing sync agent. |
> | Microsoft.Sql/servers/syncAgents/generateKey/action | Generate sync agent registration key |
> | Microsoft.Sql/servers/syncAgents/linkedDatabases/read | Return the list of sync agent linked databases |
-> | Microsoft.Sql/servers/usages/read | Return server DTU quota and current DTU consumption by all databases within the server |
+> | Microsoft.Sql/servers/usages/read | Gets the Azure SQL Database Server usages information |
> | Microsoft.Sql/servers/virtualNetworkRules/read | Return the list of virtual network rules or gets the properties for the specified virtual network rule. | > | Microsoft.Sql/servers/virtualNetworkRules/write | Creates a virtual network rule with the specified parameters or update the properties or tags for the specified virtual network rule. | > | Microsoft.Sql/servers/virtualNetworkRules/delete | Deletes an existing Virtual Network Rule |
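The new `sqlVulnerabilityAssessments` operations added above can be granted through a custom role. The sketch below assumes the Azure CLI; the role name, file name, and subscription ID are hypothetical placeholders.

```bash
# Hypothetical custom role limited to triggering SQL vulnerability assessment
# scans on databases and reading their results (operation strings are taken
# from the table above; everything else is a placeholder).
cat > sql-va-operator.json <<'EOF'
{
  "Name": "SQL Vulnerability Assessment Operator (example)",
  "IsCustom": true,
  "Description": "Trigger SQL vulnerability assessment scans and read their results.",
  "Actions": [
    "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/read",
    "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/initiateScan/action",
    "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/scans/read",
    "Microsoft.Sql/servers/databases/sqlVulnerabilityAssessments/scans/scanResults/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
EOF
az role definition create --role-definition @sql-va-operator.json
```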
Azure service: [Event Hubs](../event-hubs/index.yml)
> | Microsoft.EventHub/namespaces/authorizationRules/action | Updates Namespace Authorization Rule. This API is deprecated. Please use a PUT call to update the Namespace Authorization Rule instead.. This operation is not supported on API version 2017-04-01. | > | Microsoft.EventHub/namespaces/removeAcsNamepsace/action | Remove ACS namespace | > | Microsoft.EventHub/namespaces/privateEndpointConnectionsApproval/action | Approve Private Endpoint Connection |
+> | Microsoft.EventHub/namespaces/joinPerimeter/action | Action to Join the Network Security Perimeter. This action is used to perform linked access by NSP RP. |
> | Microsoft.EventHub/namespaces/authorizationRules/read | Get the list of Namespaces Authorization Rules description. | > | Microsoft.EventHub/namespaces/authorizationRules/write | Create a Namespace level Authorization Rules and update its properties. The Authorization Rules Access Rights, the Primary and Secondary Keys can be updated. | > | Microsoft.EventHub/namespaces/authorizationRules/delete | Delete Namespace Authorization Rule. The Default Namespace Authorization Rule cannot be deleted. |
Azure service: [Cognitive Services](../cognitive-services/index.yml)
> | Microsoft.CognitiveServices/accounts/read | Reads API accounts. | > | Microsoft.CognitiveServices/accounts/write | Writes API Accounts. | > | Microsoft.CognitiveServices/accounts/delete | Deletes API accounts |
+> | Microsoft.CognitiveServices/accounts/joinPerimeter/action | Allows joining a CognitiveServices account to a given perimeter. |
> | Microsoft.CognitiveServices/accounts/listKeys/action | List keys | > | Microsoft.CognitiveServices/accounts/regenerateKey/action | Regenerate Key | > | Microsoft.CognitiveServices/accounts/commitmentplans/read | Reads commitment plans. |
Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
> | Microsoft.IoTSecurity/defenderSettings/delete | Deletes IoT Defender Settings | > | Microsoft.IoTSecurity/defenderSettings/packageDownloads/action | Gets downloadable IoT Defender packages information | > | Microsoft.IoTSecurity/defenderSettings/downloadManagerActivation/action | Download manager activation file |
-> | Microsoft.IoTSecurity/deviceGroups/read | Gets device group |
> | Microsoft.IoTSecurity/locations/read | Gets location |
+> | Microsoft.IoTSecurity/locations/deviceGroups/read | Gets device group |
> | Microsoft.IoTSecurity/locations/deviceGroups/alerts/read | Gets IoT Alerts | > | Microsoft.IoTSecurity/locations/deviceGroups/alerts/write | Updates IoT Alert properties | > | Microsoft.IoTSecurity/locations/deviceGroups/alerts/learnAlert/action | Learn and close the alert |
Azure service: [IoT security](../iot-fundamentals/iot-security-architecture.md)
> | Microsoft.IoTSecurity/sensors/downloadActivation/action | Downloads activation file for IoT Sensors | > | Microsoft.IoTSecurity/sensors/triggerTiPackageUpdate/action | Triggers threat intelligence package update | > | Microsoft.IoTSecurity/sensors/downloadResetPassword/action | Downloads reset password file for IoT Sensors |
-> | Microsoft.IoTSecurity/sites/read | Gets IoT site |
-> | Microsoft.IoTSecurity/sites/write | Creates IoT site |
-> | Microsoft.IoTSecurity/sites/delete | Deletes IoT site |
### Microsoft.NotificationHubs
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/applynetworkconfigurationupdates/action | Updates the Microsoft.ApiManagement resources running in Virtual Network to pick updated Network Settings. | > | Microsoft.ApiManagement/service/users/action | Register a new user | > | Microsoft.ApiManagement/service/notifications/action | Sends notification to a specified user |
-> | Microsoft.ApiManagement/service/gateways/action | Retrieves gateway configuration. or Updates gateway heartbeat. |
> | Microsoft.ApiManagement/service/apis/read | Lists all APIs of the API Management service instance. or Gets the details of the API specified by its identifier. | > | Microsoft.ApiManagement/service/apis/write | Creates new or updates existing specified API of the API Management service instance. or Updates the specified API of the API Management service instance. | > | Microsoft.ApiManagement/service/apis/delete | Deletes the specified API of the API Management service instance. |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/gateways/certificateAuthorities/read | Get Gateway CAs list. or Get assigned Certificate Authority details. | > | Microsoft.ApiManagement/service/gateways/certificateAuthorities/write | Adds an API to the specified Gateway. | > | Microsoft.ApiManagement/service/gateways/certificateAuthorities/delete | Unassign Certificate Authority from Gateway. |
-> | Microsoft.ApiManagement/service/gateways/hostnameConfigurations/read | Lists the collection of hostname configurations for the specified gateway. |
+> | Microsoft.ApiManagement/service/gateways/hostnameConfigurations/read | Lists the collection of hostname configurations for the specified gateway. or Get details of a hostname configuration |
+> | Microsoft.ApiManagement/service/gateways/hostnameConfigurations/write | Creates or updates a hostname configuration for the specified gateway. |
+> | Microsoft.ApiManagement/service/gateways/hostnameConfigurations/delete | Deletes the specified hostname configuration. |
> | Microsoft.ApiManagement/service/groups/read | Lists a collection of groups defined within a service instance. or Gets the details of the group specified by its identifier. | > | Microsoft.ApiManagement/service/groups/write | Creates or Updates a group. or Updates the details of the group specified by its identifier. | > | Microsoft.ApiManagement/service/groups/delete | Deletes specific group of the API Management service instance. |
Azure service: [API Management](../api-management/index.yml)
> | Microsoft.ApiManagement/service/workspaces/subscriptions/regeneratePrimaryKey/action | Regenerates primary key of existing subscription of the API Management service instance. | > | Microsoft.ApiManagement/service/workspaces/subscriptions/regenerateSecondaryKey/action | Regenerates secondary key of existing subscription of the API Management service instance. | > | Microsoft.ApiManagement/service/workspaces/subscriptions/listSecrets/action | Gets the specified Subscription keys. |
+> | **DataAction** | **Description** |
+> | Microsoft.ApiManagement/service/gateways/getConfiguration/action | Fetches configuration for specified self-hosted gateway |
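The `getConfiguration` data action above is the only permission a self-hosted gateway identity needs to pull its configuration. A minimal sketch of a custom role carrying just that data action, assuming the Azure CLI; the role name, file name, and subscription ID are placeholders.

```bash
# Hypothetical custom role containing only the self-hosted gateway
# configuration data action listed above.
cat > apim-gateway-config-reader.json <<'EOF'
{
  "Name": "API Management Gateway Configuration Reader (example)",
  "IsCustom": true,
  "Description": "Fetch configuration for a self-hosted gateway.",
  "Actions": [],
  "DataActions": [
    "Microsoft.ApiManagement/service/gateways/getConfiguration/action"
  ],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
EOF
az role definition create --role-definition @apim-gateway-config-reader.json
```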
### Microsoft.AppConfiguration
Azure service: core
> | Microsoft.AppConfiguration/configurationStores/providers/Microsoft.Insights/diagnosticSettings/write | Write/Overwrite Diagnostic Settings for Microsoft App Configuration. | > | Microsoft.AppConfiguration/configurationStores/providers/Microsoft.Insights/logDefinitions/read | Retrieve all log definitions for Microsoft App Configuration. | > | Microsoft.AppConfiguration/configurationStores/providers/Microsoft.Insights/metricDefinitions/read | Retrieve all metric definitions for Microsoft App Configuration. |
+> | Microsoft.AppConfiguration/configurationStores/replicas/read | Gets the properties of the specified replica or lists all the replicas under the specified configuration store. |
+> | Microsoft.AppConfiguration/configurationStores/replicas/write | Creates a replica with the specified parameters. |
+> | Microsoft.AppConfiguration/configurationStores/replicas/delete | Deletes a replica. |
> | Microsoft.AppConfiguration/locations/checkNameAvailability/read | Check whether the resource name is available for use. | > | Microsoft.AppConfiguration/locations/deletedConfigurationStores/read | Gets the properties of the specified deleted configuration store or lists all the deleted configuration stores under the specified subscription. | > | Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action | Purge the specified deleted configuration store. |
Azure service: [Event Grid](../event-grid/index.yml)
> | Microsoft.EventGrid/operationResults/read | Read the result of an operation | > | Microsoft.EventGrid/operations/read | List EventGrid operations. | > | Microsoft.EventGrid/operationsStatus/read | Read the status of an operation |
+> | Microsoft.EventGrid/partnerConfigurations/read | Read a partner configuration |
+> | Microsoft.EventGrid/partnerConfigurations/write | Create or update a partner configuration |
+> | Microsoft.EventGrid/partnerConfigurations/delete | Delete a partner configuration |
+> | Microsoft.EventGrid/partnerConfigurations/authorizePartner/action | Authorize a partner in the partner configuration |
+> | Microsoft.EventGrid/partnerConfigurations/unauthorizePartner/action | Unauthorize a partner in the partner configuration |
+> | Microsoft.EventGrid/partnerDestinations/read | Read a partner destination |
+> | Microsoft.EventGrid/partnerDestinations/write | Create or update a partner destination |
+> | Microsoft.EventGrid/partnerDestinations/delete | Delete a partner destination |
+> | Microsoft.EventGrid/partnerDestinations/activate/action | Activate a partner destination |
+> | Microsoft.EventGrid/partnerDestinations/getPartnerDestinationChannelInfo/action | Get channel details of activated partner destination |
+> | Microsoft.EventGrid/partnerNamespaces/write | Create or update a partner namespace |
+> | Microsoft.EventGrid/partnerNamespaces/read | Read a partner namespace |
+> | Microsoft.EventGrid/partnerNamespaces/delete | Delete a partner namespace |
+> | Microsoft.EventGrid/partnerNamespaces/listKeys/action | List keys for a partner namespace |
+> | Microsoft.EventGrid/partnerNamespaces/regenerateKey/action | Regenerate key for a partner namespace |
+> | Microsoft.EventGrid/partnerNamespaces/PrivateEndpointConnectionsApproval/action | Approve PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/channels/read | Read a channel |
+> | Microsoft.EventGrid/partnerNamespaces/channels/write | Create or update a channel |
+> | Microsoft.EventGrid/partnerNamespaces/channels/delete | Delete a channel |
+> | Microsoft.EventGrid/partnerNamespaces/channels/channelReadinessStateChange/action | Change channel readiness state |
+> | Microsoft.EventGrid/partnerNamespaces/channels/getFullUrl/action | Get full url for the partner destination channel |
+> | Microsoft.EventGrid/partnerNamespaces/eventChannels/read | Read an event channel |
+> | Microsoft.EventGrid/partnerNamespaces/eventChannels/write | Create or update an event channel |
+> | Microsoft.EventGrid/partnerNamespaces/eventChannels/delete | Delete an event channel |
+> | Microsoft.EventGrid/partnerNamespaces/eventChannels/channelReadinessStateChange/action | Change event channel readiness state |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/validate/action | Validate PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/read | Read PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/write | Write PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnectionProxies/delete | Delete PrivateEndpointConnectionProxies for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnections/read | Read PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnections/write | Write PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateEndpointConnections/delete | Delete PrivateEndpointConnections for partner namespaces |
+> | Microsoft.EventGrid/partnerNamespaces/privateLinkResources/read | Read PrivateLinkResources for partner namespaces |
> | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for partner namespaces |
> | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for partner namespaces |
> | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/logDefinitions/read | Allows access to diagnostic logs |
> | Microsoft.EventGrid/partnerNamespaces/providers/Microsoft.Insights/metricDefinitions/read | Gets the available metrics for partner namespaces |
+> | Microsoft.EventGrid/partnerRegistrations/write | Create or update a partner registration |
+> | Microsoft.EventGrid/partnerRegistrations/read | Read a partner registration |
+> | Microsoft.EventGrid/partnerRegistrations/delete | Delete a partner registration |
+> | Microsoft.EventGrid/partnerTopics/read | Read a partner topic |
+> | Microsoft.EventGrid/partnerTopics/write | Create or update a partner topic |
+> | Microsoft.EventGrid/partnerTopics/delete | Delete a partner topic |
+> | Microsoft.EventGrid/partnerTopics/activate/action | Activate a partner topic |
+> | Microsoft.EventGrid/partnerTopics/deactivate/action | Deactivate a partner topic |
+> | Microsoft.EventGrid/partnerTopics/eventSubscriptions/write | Create or update a PartnerTopic eventSubscription |
+> | Microsoft.EventGrid/partnerTopics/eventSubscriptions/read | Read a partner topic event subscription |
+> | Microsoft.EventGrid/partnerTopics/eventSubscriptions/delete | Delete a partner topic event subscription |
+> | Microsoft.EventGrid/partnerTopics/eventSubscriptions/getFullUrl/action | Get full url for the partner topic event subscription |
> | Microsoft.EventGrid/partnerTopics/eventSubscriptions/getDeliveryAttributes/action | Get PartnerTopic EventSubscription Delivery Attributes | > | Microsoft.EventGrid/partnerTopics/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for partner topics | > | Microsoft.EventGrid/partnerTopics/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for partner topics |
Azure service: [Event Grid](../event-grid/index.yml)
> | Microsoft.EventGrid/topictypes/read | Read a topictype | > | Microsoft.EventGrid/topictypes/eventSubscriptions/read | List global event subscriptions by topic type | > | Microsoft.EventGrid/topictypes/eventtypes/read | Read eventtypes supported by a topictype |
+> | Microsoft.EventGrid/verifiedPartners/read | Read a verified partner |
> | **DataAction** | **Description** | > | Microsoft.EventGrid/events/send/action | Send events to topics |
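The `Microsoft.EventGrid/events/send/action` data action shown above is what publishing clients need. A minimal sketch of a custom role carrying only that data action, assuming the Azure CLI; the role name, file name, and subscription ID are placeholders, and in practice you would assign the role at the scope of a specific topic or domain.

```bash
# Hypothetical custom role that allows publishing events to Event Grid.
cat > eventgrid-sender.json <<'EOF'
{
  "Name": "Event Grid Sender (example)",
  "IsCustom": true,
  "Description": "Publish events to Event Grid topics.",
  "Actions": [],
  "DataActions": [
    "Microsoft.EventGrid/events/send/action"
  ],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
EOF
az role definition create --role-definition @eventgrid-sender.json
```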
Azure service: [Azure Active Directory Domain Services](../active-directory-doma
> | Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/read | Gets the diagnostic setting for Domain Service | > | Microsoft.AAD/domainServices/providers/Microsoft.Insights/diagnosticSettings/write | Creates or updates the diagnostic setting for the Domain Service resource | > | Microsoft.AAD/domainServices/providers/Microsoft.Insights/logDefinitions/read | Gets the available logs for Domain Service |
+> | Microsoft.AAD/domainServices/providers/Microsoft.Insights/metricDefinitions/read | Gets metrics for Domain Service |
> | Microsoft.AAD/locations/operationresults/read | | > | Microsoft.AAD/Operations/read | |
Azure service: [Microsoft Sentinel](../sentinel/index.yml)
> | Microsoft.SecurityInsights/threatintelligence/bulkTag/action | Bulk Tags Threat Intelligence | > | Microsoft.SecurityInsights/threatintelligence/createIndicator/action | Create Threat Intelligence Indicator | > | Microsoft.SecurityInsights/threatintelligence/queryIndicators/action | Query Threat Intelligence Indicators |
+> | Microsoft.SecurityInsights/threatintelligence/bulkactions/read | Reads TI Bulk Action objects |
+> | Microsoft.SecurityInsights/threatintelligence/bulkactions/write | Creates or updates a TI Bulk Action |
+> | Microsoft.SecurityInsights/threatintelligence/bulkactions/delete | Deletes a TI Bulk Action |
> | Microsoft.SecurityInsights/threatintelligence/indicators/write | Updates Threat Intelligence Indicators | > | Microsoft.SecurityInsights/threatintelligence/indicators/delete | Deletes Threat Intelligence Indicators | > | Microsoft.SecurityInsights/threatintelligence/indicators/query/action | Query Threat Intelligence Indicators |
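The new threat intelligence `bulkactions` operations added above can likewise be granted through a custom role. A sketch under the same assumptions as the earlier examples (Azure CLI available; role name, file name, and subscription ID are hypothetical):

```bash
# Hypothetical custom role for managing threat intelligence bulk actions;
# the operation strings come from the table above, everything else is a placeholder.
cat > sentinel-ti-bulkactions.json <<'EOF'
{
  "Name": "Sentinel TI Bulk Action Contributor (example)",
  "IsCustom": true,
  "Description": "Read, create, and delete threat intelligence bulk actions.",
  "Actions": [
    "Microsoft.SecurityInsights/threatintelligence/bulkactions/read",
    "Microsoft.SecurityInsights/threatintelligence/bulkactions/write",
    "Microsoft.SecurityInsights/threatintelligence/bulkactions/delete"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
EOF
az role definition create --role-definition @sentinel-ti-bulkactions.json
```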
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Action | Description | > | | | > | Microsoft.RecoveryServices/register/action | Registers subscription for given Resource Provider |
-> | Microsoft.RecoveryServices/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
-> | Microsoft.RecoveryServices/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupPreValidateProtection/action | |
-> | Microsoft.RecoveryServices/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
-> | Microsoft.RecoveryServices/Locations/backupValidateFeatures/action | Validate Features |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrossRegionRestore/action | Trigger Cross region restore. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrJob/action | Get Cross Region Restore Job Details in the secondary region for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrJobs/action | List Cross Region Restore Jobs in the secondary region for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupPreValidateProtection/action | |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupStatus/action | Check Backup Status for Recovery Services Vaults |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupValidateFeatures/action | Validate Features |
> | Microsoft.RecoveryServices/locations/allocateStamp/action | AllocateStamp is internal operation used by service | > | Microsoft.RecoveryServices/locations/checkNameAvailability/action | Check Resource Name Availability is an API to check if resource name is available | > | Microsoft.RecoveryServices/locations/allocatedStamp/read | GetAllocatedStamp is internal operation used by service |
-> | Microsoft.RecoveryServices/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
-> | Microsoft.RecoveryServices/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Locations/backupProtectedItem/write | Create a backup Protected Item |
-> | Microsoft.RecoveryServices/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupAadProperties/read | Get AAD Properties for authentication in the third region for Cross Region Restore. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrOperationResults/read | Returns CRR Operation Result for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupCrrOperationsStatus/read | Returns CRR Operation Status for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupProtectedItem/write | Create a backup Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Locations/backupProtectedItems/read | Returns the list of all Protected Items. |
> | Microsoft.RecoveryServices/locations/operationStatus/read | Gets Operation Status for a given Operation | > | Microsoft.RecoveryServices/operations/read | Operation returns the list of Operations for a Resource Provider |
-> | Microsoft.RecoveryServices/Vaults/backupJobsExport/action | Export Jobs |
-> | Microsoft.RecoveryServices/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobsExport/action | Export Jobs |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupSecurityPIN/action | Returns Security PIN Information for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupTriggerValidateOperation/action | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperation/action | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/write | Create Vault operation creates an Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/read | The Get Vault operation gets an object representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/delete | The Delete Vault operation deletes the specified Azure resource of type 'vault' |
-> | Microsoft.RecoveryServices/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
-> | Microsoft.RecoveryServices/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
-> | Microsoft.RecoveryServices/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/cancel/action | Cancel the Job |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/read | Returns all Job Objects |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/delete | Delete a Protection Policy |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/read | Returns all Protection Policies |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/write | Creates Protection Policy |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
-> | Microsoft.RecoveryServices/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
-> | Microsoft.RecoveryServices/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
-> | Microsoft.RecoveryServices/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
-> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
-> | Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services . |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
-> | Microsoft.RecoveryServices/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupconfig/read | Returns Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupconfig/write | Updates Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEncryptionConfigs/read | Gets Backup Resource Encryption Configuration. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEncryptionConfigs/write | Updates Backup Resource Encryption Configuration |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupEngines/read | Returns all the backup management servers registered with vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/refreshContainers/action | Refreshes the container list |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/delete | Delete a backup Protection Intent |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/read | Get a backup Protection Intent |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/backupProtectionIntent/write | Create a backup Protection Intent |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/operationResults/read | Returns status of the operation |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/operationsStatus/read | Returns status of the operation |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectableContainers/read | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/delete | Deletes the registered Container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/inquire/action | Do inquiry for workloads within a container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/read | Returns all registered containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/write | Creates a registered container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/items/read | Get all items in a container |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/operationResults/read | Gets result of Operation performed on Protection Container. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/operationsStatus/read | Gets status of Operation performed on Protection Container. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/backup/action | Performs Backup for Protected Item. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/delete | Deletes Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/read | Returns object details of the Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPointsRecommendedForMove/action | Get Recovery points recommended for move to another tier |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/write | Create a backup Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/operationResults/read | Gets Result of Operation Performed on Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/operationsStatus/read | Returns the status of Operation performed on Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/accessToken/action | Get AccessToken for Cross Region Restore. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/move/action | Move Recovery point to another tier |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/provisionInstantItemRecovery/action | Provision Instant Item Recovery for Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/read | Get Recovery Points for Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/restore/action | Restore Recovery Points for Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupFabrics/protectionContainers/protectedItems/recoveryPoints/revokeInstantItemRecovery/action | Revoke Instant Item Recovery for Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/cancel/action | Cancel the Job |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/read | Returns all Job Objects |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/operationResults/read | Returns the Result of Job Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupJobs/operationsStatus/read | Returns the status of Job Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupOperationResults/read | Returns Backup Operation Result for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupOperations/read | Returns Backup Operation Status for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/delete | Delete a Protection Policy |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/read | Returns all Protection Policies |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/write | Creates Protection Policy |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/operationResults/read | Get Results of Policy Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupPolicies/operations/read | Get Status of Policy Operation. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectableItems/read | Returns list of all Protectable Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectedItems/read | Returns the list of all Protected Items. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectionContainers/read | Returns all containers belonging to the subscription |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupProtectionIntents/read | List all backup Protection Intents |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/delete | The Delete ResourceGuard proxy operation deletes the specified Azure resource of type 'ResourceGuard proxy' |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/read | Get the list of ResourceGuard proxies for a resource |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/read | Get ResourceGuard proxy operation gets an object representing the Azure resource of type 'ResourceGuard proxy' |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/unlockDelete/action | Unlock delete ResourceGuard proxy operation unlocks the next delete critical operation |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupResourceGuardProxies/write | Create ResourceGuard proxy operation creates an Azure resource of type 'ResourceGuard Proxy' |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupstorageconfig/read | Returns Storage Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupstorageconfig/write | Updates Storage Configuration for Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupUsageSummaries/read | Returns summaries for Protected Items and Protected Servers for a Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperationResults/read | Validate Operation on Protected Item |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/backupValidateOperationsStatuses/read | Validate Operation on Protected Item |
> | Microsoft.RecoveryServices/Vaults/certificates/write | The Update Resource Certificate operation updates the resource/vault credential certificate. | > | Microsoft.RecoveryServices/Vaults/extendedInformation/read | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' | > | Microsoft.RecoveryServices/Vaults/extendedInformation/write | The Get Extended Info operation gets an object's Extended Info representing the Azure resource of type 'vault' |
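Several of the vault backup operations listed above are read-only and suit a monitoring role. A minimal sketch, assuming the Azure CLI; the role name, file name, and subscription ID are placeholders (Azure RBAC matches action strings case-insensitively, so the casing shown in the updated table does not change how they are written in a role definition).

```bash
# Hypothetical read-only backup monitoring role built from operations in the
# table above; replace the subscription ID with your own scope.
cat > backup-monitoring-reader.json <<'EOF'
{
  "Name": "Backup Monitoring Reader (example)",
  "IsCustom": true,
  "Description": "Read backup jobs, policies, protected items, and usage in Recovery Services vaults.",
  "Actions": [
    "Microsoft.RecoveryServices/Vaults/read",
    "Microsoft.RecoveryServices/Vaults/backupJobs/read",
    "Microsoft.RecoveryServices/Vaults/backupPolicies/read",
    "Microsoft.RecoveryServices/Vaults/backupProtectedItems/read",
    "Microsoft.RecoveryServices/Vaults/backupUsageSummaries/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
EOF
az role definition create --role-definition @backup-monitoring-reader.json
```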
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/Vaults/monitoringAlerts/write | Resolves the alert. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/read | Gets the Recovery services vault notification configuration. | > | Microsoft.RecoveryServices/Vaults/monitoringConfigurations/write | Configures e-mail notifications to Recovery services vault. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
-> | Microsoft.RecoveryServices/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/delete | Wait for a few minutes and then try the operation again. If the issue persists, please contact Microsoft support. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/read | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/validate/action | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/write | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnectionProxies/operationsStatus/read | Get all protectable containers |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/delete | Delete Private Endpoint requests. This call is made by Backup Admin. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/write | Approve or Reject Private Endpoint requests. This call is made by Backup Admin. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/privateEndpointConnections/operationsStatus/read | Returns the operation status for a private endpoint connection. |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/read | Azure Backup Diagnostics |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/diagnosticSettings/write | Azure Backup Diagnostics |
> | Microsoft.RecoveryServices/Vaults/providers/Microsoft.Insights/logDefinitions/read | Azure Backup Logs |
Azure service: [Site Recovery](../site-recovery/index.yml)
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/read | Read any |
> | Microsoft.RecoveryServices/vaults/replicationVaultSettings/write | Create or Update any |
> | Microsoft.RecoveryServices/vaults/replicationvCenters/read | Read any vCenters |
-> | Microsoft.RecoveryServices/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
+> | MICROSOFT.RECOVERYSERVICES/Vaults/usages/read | Returns usage details for a Recovery Services Vault. |
> | Microsoft.RecoveryServices/vaults/usages/read | Read any Vault Usages |
> | Microsoft.RecoveryServices/Vaults/vaultTokens/read | The Vault Token operation can be used to get Vault Token for vault level backend operations. |
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 02/18/2022 Last updated : 06/21/2022
Azure supports up to **500** role assignments per management group. This limit i
- If you are unable to delete a custom role and get the error message "There are existing role assignments referencing role (code: RoleDefinitionHasAssignments)", then there are role assignments still using the custom role. Remove those role assignments and try to delete the custom role again.
- If you get the error message "Role definition limit exceeded. No more role definitions can be created (code: RoleDefinitionLimitExceeded)" when you try to create a new custom role, delete any custom roles that aren't being used. Azure supports up to **5000** custom roles in a directory. (For Azure Germany and Azure China 21Vianet, the limit is 2000 custom roles.)
- If you get an error similar to "The client has permission to perform action 'Microsoft.Authorization/roleDefinitions/write' on scope '/subscriptions/{subscriptionid}', however the linked subscription was not found" when you try to update a custom role, check whether one or more [assignable scopes](role-definitions.md#assignablescopes) have been deleted in the directory. If the scope was deleted, then create a support ticket as there is no self-service solution available at this time.
+- When you attempt to create or update a custom role, you get an error similar to "The client '&lt;clientName&gt;' with object id '&lt;objectId&gt;' has permission to perform action 'Microsoft.Authorization/roleDefinitions/write' on scope '/subscriptions/&lt;subscriptionId&gt;'; however, it does not have permission to perform action 'Microsoft.Authorization/roleDefinitions/write' on the linked scope(s) '/subscriptions/&lt;subscriptionId1&gt;,/subscriptions/&lt;subscriptionId2&gt;,/subscriptions/&lt;subscriptionId3&gt;' or the linked scope(s) are invalid". This error usually indicates that you do not have permissions to one or more of the [assignable scopes](role-definitions.md#assignablescopes) in the custom role. You can try the following (an example role definition follows this list):
+ - Review [Who can create, delete, update, or view a custom role](custom-roles.md#who-can-create-delete-update-or-view-a-custom-role) and check that you have permissions to create or update the custom role for all assignable scopes.
+ - If you don't have permissions, ask your administrator to assign you a role that has the `Microsoft.Authorization/roleDefinitions/write` action, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator), at the scope of each assignable scope.
+ - Check that all the assignable scopes in the custom role are valid. If not, remove any invalid assignable scopes.
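For reference, here's a minimal sketch of a custom role definition. The role name and subscription IDs are hypothetical placeholders; the point is that whoever creates or updates the role needs the `Microsoft.Authorization/roleDefinitions/write` action at every scope listed under `AssignableScopes`:

```json
{
  "Name": "Example Virtual Machine Operator",
  "IsCustom": true,
  "Description": "Can monitor and restart virtual machines.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000001",
    "/subscriptions/00000000-0000-0000-0000-000000000002"
  ]
}
```

If the error lists several linked scopes, each of those subscriptions or management groups appears in `AssignableScopes`, and you must be able to write role definitions at each of them.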
## Custom roles and management groups
search Cognitive Search Common Errors Warnings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md
Previously updated : 05/24/2022 Last updated : 06/23/2022 # Troubleshooting common indexer errors and warnings in Azure Cognitive Search
If you want indexers to ignore these errors (and skip over "failed documents"),
The error information in this article can help you resolve errors, allowing indexing to continue.
-Warnings do not stop indexing, but they do indicate conditions that could result in unexpected outcomes. Whether you take action or not depends on the data and your scenario.
+Warnings don't stop indexing, but they do indicate conditions that could result in unexpected outcomes. Whether you take action or not depends on the data and your scenario.
Beginning with API version `2019-05-06`, item-level Indexer errors and warnings are structured to provide increased clarity around causes and next steps. They contain the following properties:
Indexer was unable to read the document from the data source. This can happen du
<a name="could-not-extract-document-content"></a> ## `Error: Could not extract content or metadata from your document`+ Indexer with a Blob data source was unable to extract the content or metadata from the document (for example, a PDF file). This can happen due to: | Reason | Details/Example | Resolution |
Indexer with a Blob data source was unable to extract the content or metadata fr
<a name="could-not-parse-document"></a> ## `Error: Could not parse document`+ Indexer read the document from the data source, but there was an issue converting the document content into the specified field mapping schema. This can happen due to: | Reason | Details/Example | Resolution |
Indexer read the document from the data source, but there was an issue convertin
| Could not apply field mapping to a field | `Could not apply mapping function 'functionName' to field 'fieldName'. Array cannot be null. Parameter name: bytes` | Double check the [field mappings](search-indexer-field-mappings.md) defined on the indexer, and compare with the data of the specified field of the failed document. It may be necessary to modify the field mappings or the document data. | | Could not read field value | `Could not read the value of column 'fieldName' at index 'fieldIndex'. A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)` | These errors are typically due to unexpected connectivity issues with the data source's underlying service. Try running the document through your indexer again later. |
+<a name="Could not map output field '`xyz`' to search index due to deserialization problem while applying mapping function '`abc`'"></a>
## `Error: Could not map output field 'xyz' to search index due to deserialization problem while applying mapping function 'abc'`
-The output mapping might have failed because the output data is in the wrong format for the mapping function you are using. For example, applying Base64Encode mapping function on binary data would generate this error. To resolve the issue, either rerun indexer without specifying mapping function or ensure that the mapping function is compatible with the output field data type. See [Output field mapping](cognitive-search-output-field-mapping.md) for details.
+The output mapping might have failed because the output data is in the wrong format for the mapping function you're using. For example, applying the Base64Encode mapping function to binary data would generate this error. To resolve the issue, either rerun the indexer without specifying the mapping function or ensure that the mapping function is compatible with the output field data type. See [Output field mapping](cognitive-search-output-field-mapping.md) for details.
+
+<a name="could-not-execute-skill"></a>
## `Error: Could not execute skill`+ The indexer was not able to run a skill in the skillset. | Reason | Details/Example | Resolution | | | | | | Transient connectivity issues | A transient error occurred. Please try again later. | Occasionally there are unexpected connectivity issues. Try running the document through your indexer again later. |
-| Potential product bug | An unexpected error occurred. | This indicates an unknown class of failure and may mean there is a product bug. File a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get help. |
+| Potential product bug | An unexpected error occurred. | This indicates an unknown class of failure and may indicate a product bug. File a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get help. |
| A skill has encountered an error during execution | (From Merge Skill) One or more offset values were invalid and could not be parsed. Items were inserted at the end of the text | Use the information in the error message to fix the issue. This kind of failure will require action to resolve. | -
+<a name="could-not-execute-skill-because-the-web-api-request-failed"></a>
## `Error: Could not execute skill because the Web API request failed`
-The skill execution failed because the call to the Web API failed. Typically, this class of failure occurs when custom skills are used, in which case you will need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, refer to the error message for help in fixing the issue.
+
+The skill execution failed because the call to the Web API failed. Typically, this class of failure occurs when custom skills are used, in which case you'll need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, refer to the error message for help in fixing the issue.
While debugging this issue, be sure to pay attention to any [skill input warnings](#warning-skill-input-was-invalid) for this skill. Your Web API endpoint may be failing because the indexer is passing it unexpected input.
+<a name="could-not-execute-skill-because-web-api-skill-response-is-invalid"></a>
## `Error: Could not execute skill because Web API skill response is invalid`
-The skill execution failed because the call to the Web API returned an invalid response. Typically, this class of failure occurs when custom skills are used, in which case you will need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance.
+The skill execution failed because the call to the Web API returned an invalid response. Typically, this class of failure occurs when custom skills are used, in which case you'll need to debug your custom code to resolve the issue. If instead the failure is from a built-in skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance.
+
+<a name="skill-did-not-execute-within-the-time-limit"></a>
## `Error: Type of value has a mismatch with column type. Couldn't store in 'xyz' column. Expected type is 'abc'`
-If your data source has a field with a different data type than the field you are trying to map in your index, you may encounter this error. Check your data source field data types and make sure they are [mapped correctly to your index data types](/rest/api/searchservice/data-type-map-for-indexers-in-azure-search).
+If your data source has a field with a different data type than the field you're trying to map in your index, you may encounter this error. Check your data source field data types and make sure they are [mapped correctly to your index data types](/rest/api/searchservice/data-type-map-for-indexers-in-azure-search).
+<a name="skill-did-not-execute-within-the-time-limit"></a>
## `Error: Skill did not execute within the time limit`+ There are two cases under which you may encounter this error message, each of which should be treated differently. Follow the instructions below depending on what skill returned this error for you. ### Built-in Cognitive Service skills+ Many of the built-in cognitive skills, such as language detection, entity recognition, or OCR, are backed by a Cognitive Service API endpoint. Sometimes there are transient issues with these endpoints and a request will time out. For transient issues, there is no remedy except to wait and try again. As a mitigation, consider setting your indexer to [run on a schedule](search-howto-schedule-indexers.md). Scheduled indexing picks up where it left off. Assuming transient issues are resolved, indexing and cognitive skill processing should be able to continue on the next scheduled run.
-If you continue to see this error on the same document for a built-in cognitive skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance, as this is not expected.
+If you continue to see this error on the same document for a built-in cognitive skill, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) to get assistance, as this isn't expected.
### Custom skills
-If you encounter a timeout error with a custom skill you have created, there are a couple of things you can try. First, review your custom skill and ensure that it is not getting stuck in an infinite loop and that it is returning a result consistently. Once you have confirmed that is the case, determine what the execution time of your skill is. If you didn't explicitly set a `timeout` value on your custom skill definition, then the default `timeout` is 30 seconds. If 30 seconds is not long enough for your skill to execute, you may specify a higher `timeout` value on your custom skill definition. Here is an example of a custom skill definition where the timeout is set to 90 seconds:
+
+If you encounter a timeout error with a custom skill, there are a couple of things you can try. First, review your custom skill and ensure that it's not getting stuck in an infinite loop and that it's returning a result consistently. Once you have confirmed that a result is returned, check the duration of execution. If you didn't explicitly set a `timeout` value on your custom skill definition, then the default `timeout` is 30 seconds. If 30 seconds isn't long enough for your skill to execute, you may specify a higher `timeout` value on your custom skill definition. Here's an example of a custom skill definition where the timeout is set to 90 seconds:
```json {
If you encounter a timeout error with a custom skill you have created, there are
} ```
-The maximum value that you can set for the `timeout` parameter is 230 seconds. If your custom skill is unable to execute consistently within 230 seconds, you may consider reducing the `batchSize` of your custom skill so that it will have fewer documents to process within a single execution. If you have already set your `batchSize` to 1, you will need to rewrite the skill to be able to execute in under 230 seconds or otherwise split it into multiple custom skills so that the execution time for any single custom skill is a maximum of 230 seconds. Review the [custom skill documentation](cognitive-search-custom-skill-web-api.md) for more information.
-
+The maximum value that you can set for the `timeout` parameter is 230 seconds. If your custom skill is unable to execute consistently within 230 seconds, you may consider reducing the `batchSize` of your custom skill so that it will have fewer documents to process within a single execution. If you have already set your `batchSize` to 1, you'll need to rewrite the skill to be able to execute in under 230 seconds or otherwise split it into multiple custom skills so that the execution time for any single custom skill is a maximum of 230 seconds. Review the [custom skill documentation](cognitive-search-custom-skill-web-api.md) for more information.
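As a hedged sketch (the URI, input source, and output names below are placeholders, not part of the original article), a custom Web API skill that raises the timeout to the 230-second maximum and reduces the batch size to one document per call might look like this:

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "description": "Hypothetical custom skill with maximum timeout and minimal batch size",
  "uri": "https://example.com/api/enrich",
  "httpMethod": "POST",
  "timeout": "PT230S",
  "batchSize": 1,
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "result", "targetName": "enrichedText" }
  ]
}
```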
+<a name="could-not-mergeorupload--delete-document-to-the-search-index"></a>
## `Error: Could not 'MergeOrUpload' | 'Delete' document to the search index`+ The document was read and processed, but the indexer could not add it to the search index. This can happen due to: | Reason | Details/Example | Resolution | | | | |
-| A field contains a term that is too large | A term in your document is larger than the [32 KB limit](search-limits-quotas-capacity.md#api-request-limits) | You can avoid this restriction by ensuring the field is not configured as filterable, facetable, or sortable.
+| A field contains a term that is too large | A term in your document is larger than the [32 KB limit](search-limits-quotas-capacity.md#api-request-limits) | You can avoid this restriction by ensuring the field isn't configured as filterable, facetable, or sortable (see the field definition sketch after this table). |
| Document is too large to be indexed | A document is larger than the [maximum api request size](search-limits-quotas-capacity.md#api-request-limits) | [How to index large data sets](search-howto-large-index.md) | Document contains too many objects in collection | A collection in your document exceeds the [maximum elements across all complex collections limit](search-limits-quotas-capacity.md#index-limits). `The document with key '1000052' has '4303' objects in collections (JSON arrays). At most '3000' objects are allowed to be in collections across the entire document. Remove objects from collections and try indexing the document again.` | We recommend reducing the size of the complex collection in the document to below the limit and avoid high storage utilization. | Trouble connecting to the target index (that persists after retries) because the service is under other load, such as querying or indexing. | Failed to establish connection to update index. Search service is under heavy load. | [Scale up your search service](search-capacity-planning.md)
The document was read and processed, but the indexer could not add it to the sea
| Failure in the underlying compute/networking resource (rare) | Failed to establish connection to update index. An unknown failure occurred. | Configure indexers to [run on a schedule](search-howto-schedule-indexers.md) to pick up from a failed state. | An indexing request made to the target index was not acknowledged within a timeout period due to network issues. | Could not establish connection to the search index in a timely manner. | Configure indexers to [run on a schedule](search-howto-schedule-indexers.md) to pick up from a failed state. Additionally, try lowering the indexer [batch size](/rest/api/searchservice/create-indexer#parameters) if this error condition persists. -
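As a hedged illustration of the first row's resolution (the field name here is hypothetical), a large text field can be defined with only the attributes it needs, leaving filterable, sortable, and facetable off so the 32 KB term limit doesn't apply:

```json
{
  "name": "largeTextContent",
  "type": "Edm.String",
  "searchable": true,
  "retrievable": true,
  "filterable": false,
  "sortable": false,
  "facetable": false
}
```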
+<a name="could-not-index-document-because-the-indexer-data-to-index-was-invalid"></a>
## `Error: Could not index document because some of the document's data was not valid`+ The document was read and processed by the indexer, but due to a mismatch in the configuration of the index fields and the data extracted and processed by the indexer, it could not be added to the search index. This can happen due to: | Reason | Details/Example
The document was read and processed by the indexer, but due to a mismatch in the
In all these cases, refer to [Supported Data types](/rest/api/searchservice/supported-data-types) and [Data type map for indexers](/rest/api/searchservice/data-type-map-for-indexers-in-azure-search) to make sure that you build the index schema correctly and have set up appropriate [indexer field mappings](search-indexer-field-mappings.md). The error message will include details that can help track down the source of the mismatch. ## `Error: Integrated change tracking policy cannot be used because table has a composite primary key`
-This applies to SQL tables, and usually happens when the key is either defined as a composite key or, when the table has defined a unique clustered index (as in a SQL index, not an Azure Search index). The main reason is that the key attribute is modified to be a composite primary key in the case of a [unique clustered index](/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described). In that case, make sure that your SQL table does not have a unique clustered index, or that you map the key field to a field that is guaranteed not to have duplicate values.
+This error applies to SQL tables, and usually happens when the key is either defined as a composite key or when the table has a unique clustered index defined (as in a SQL index, not an Azure Search index). The main reason is that the key attribute is modified to be a composite primary key in the case of a [unique clustered index](/sql/relational-databases/indexes/clustered-and-nonclustered-indexes-described). In that case, make sure that your SQL table doesn't have a unique clustered index, or that you map the key field to a field that is guaranteed not to have duplicate values.
+<a name="could-not-process-document-within-indexer-max-run-time"></a>
## `Error: Could not process document within indexer max run time`
-This error occurs when the indexer is unable to finish processing a single document from the data source within the allowed execution time. [Maximum running time](search-limits-quotas-capacity.md#indexer-limits) is shorter when skillsets are used. When this error occurs, if you have maxFailedItems set to a value other than 0, the indexer bypasses the document on future runs so that indexing can progress. If you cannot afford to skip any document, or if you are seeing this error consistently, consider breaking documents into smaller documents so that partial progress can be made within a single indexer execution.
+This error occurs when the indexer is unable to finish processing a single document from the data source within the allowed execution time. [Maximum running time](search-limits-quotas-capacity.md#indexer-limits) is shorter when skillsets are used. When this error occurs, if you have maxFailedItems set to a value other than 0, the indexer bypasses the document on future runs so that indexing can progress. If you can't afford to skip any document, or if you're seeing this error consistently, consider breaking documents into smaller documents so that partial progress can be made within a single indexer execution.
+<a name="could-not-project-document"></a>
## `Error: Could not project document`
-This error occurs when the indexer is attempting to [project data into a knowledge store](knowledge-store-projection-overview.md) and there was a failure on the attempt. This failure could be consistent and fixable, or it could be a transient failure with the projection output sink that you may need to wait and retry in order to resolve. Here is a set of known failure states and possible resolutions.
+
+This error occurs when the indexer is attempting to [project data into a knowledge store](knowledge-store-projection-overview.md) and there was a failure on the attempt. This failure could be consistent and fixable, or it could be a transient failure with the projection output sink that you may need to wait and retry in order to resolve. Here's a set of known failure states and possible resolutions.
| Reason | Details/Example | Resolution | | | | |
-| Could not update projection blob `'blobUri'` in container `'containerName'` |The specified container does not exist. | The indexer will check if the specified container has been previously created and will create it if necessary, but it only performs this check once per indexer run. This error means that something deleted the container after this step. To resolve this error, try this: leave your storage account information alone, wait for the indexer to finish, and then rerun the indexer. |
+| Could not update projection blob `'blobUri'` in container `'containerName'` |The specified container doesn't exist. | The indexer will check if the specified container has been previously created and will create it if necessary, but it only performs this check once per indexer run. This error means that something deleted the container after this step. To resolve this error, try this: leave your storage account information alone, wait for the indexer to finish, and then rerun the indexer. |
| Could not update projection blob `'blobUri'` in container `'containerName'` |Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host. | This is expected to be a transient failure with Azure Storage and thus should be resolved by rerunning the indexer. If you encounter this error consistently, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so it can be investigated further. | | Could not update row `'projectionRow'` in table `'tableName'` | The server is busy. | This is expected to be a transient failure with Azure Storage and thus should be resolved by rerunning the indexer. If you encounter this error consistently, file a [support ticket](https://portal.azure.com/#create/Microsoft.Support) so it can be investigated further. |
+<a name="skill-throttled"></a>
+
+## `Error: The cognitive service for skill '<skill-name>' has been throttled`
+
+Skill execution failed because the call to Cognitive Services was throttled. Typically, this class of failure occurs when too many skills are executing in parallel. If you're using the Azure.Search.Documents client library to run the indexer, you can use the [SearchIndexingBufferedSender](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample05_IndexingDocuments.md#searchindexingbufferedsender) to get automatic retry on failed steps. Otherwise, you can [reset and rerun the indexer](search-howto-run-reset-indexers.md).
+
+<a name="could-not-execute-skill-because-a-skill-input-was-invalid"></a>
## `Warning: Skill input was invalid`+ An input to the skill was missing, it has the wrong type, or otherwise, invalid. The warning message will indicate the impact:
-1) `Could not execute skill`
-2) `Skill executed but may have unexpected results`
+
+1. `Could not execute skill`
+
+1. `Skill executed but may have unexpected results`
Cognitive skills have required inputs and optional inputs. For example, the [Key phrase extraction skill](cognitive-search-skill-keyphrases.md) has two required inputs `text`, `languageCode`, and no optional inputs. Custom skill inputs are all considered optional inputs.
-If any required inputs are missing or if any input is not the right type, the skill gets skipped and generates a warning. Skipped skills do not generate any outputs, so if other skills use outputs of the skipped skill they may generate additional warnings.
+If required inputs are missing or if the input isn't the right type, the skill gets skipped and generates a warning. Skipped skills don't generate outputs. If downstream skills consume the outputs of the skipped skill, they may generate additional warnings.
-If an optional input is missing, the skill will still run but may produce unexpected output due to the missing input.
+If an optional input is missing, the skill still runs but may produce unexpected output due to the missing input.
-In both cases, this warning may be expected due to the shape of your data. For example, if you have a document containing information about people with the fields `firstName`, `middleName`, and `lastName`, you may have some documents which do not have an entry for `middleName`. If you pass `middleName` as an input to a skill in the pipeline, then it is expected that this skill input may be missing some of the time. You will need to evaluate your data and scenario to determine whether or not any action is required as a result of this warning.
+In both cases, this warning may be expected due to the shape of your data. For example, if you have a document containing information about people with the fields `firstName`, `middleName`, and `lastName`, you may have some documents which don't have an entry for `middleName`. If you pass `middleName` as an input to a skill in the pipeline, then it's expected that this skill input may be missing some of the time. You will need to evaluate your data and scenario to determine whether or not any action is required as a result of this warning.
If you want to provide a default value in case of missing input, you can use the [Conditional skill](cognitive-search-skill-conditional.md) to generate a default value and then use the output of the [Conditional skill](cognitive-search-skill-conditional.md) as the skill input.
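For example, a Conditional skill along these lines (sketched here against the hypothetical `middleName` field mentioned above) supplies a placeholder value that downstream skills can consume instead of the missing input:

```json
{
  "@odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
  "description": "Supply a placeholder when middleName is missing",
  "context": "/document",
  "inputs": [
    { "name": "condition", "source": "= $(/document/middleName) == null" },
    { "name": "whenTrue", "source": "= 'unknown'" },
    { "name": "whenFalse", "source": "= $(/document/middleName)" }
  ],
  "outputs": [
    { "name": "output", "targetName": "middleNameWithDefault" }
  ]
}
```

Downstream skills would then take `/document/middleNameWithDefault` as their input instead of `/document/middleName`.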
If you want to provide a default value in case of missing input, you can use the
| Reason | Details/Example | Resolution | | | | | | Skill input is the wrong type | "Required skill input was not of the expected type `String`. Name: `text`, Source: `/document/merged_content`." "Required skill input was not of the expected format. Name: `text`, Source: `/document/merged_content`." "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." "Unable to select `0` in non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`" | Certain skills expect inputs of particular types, for example [Sentiment skill](cognitive-search-skill-sentiment-v3.md) expects `text` to be a string. If the input specifies a non-string value, then the skill doesn't execute and generates no outputs. Ensure your data set has input values uniform in type, or use a [Custom Web API skill](cognitive-search-custom-skill-web-api.md) to preprocess the input. If you're iterating the skill over an array, check the skill context and input have `*` in the correct positions. Usually both the context and input source should end with `*` for arrays. |
-| Skill input is missing | `Required skill input is missing. Name: text, Source: /document/merged_content` `Missing value /document/normalized_images/0/imageTags.` `Unable to select 0 in array /document/pages of length 0.` | If all your documents get this warning, most likely there is a typo in the input paths and you should double check property name casing, extra or missing `*` in the path, and make sure that the documents from the data source provide the required inputs. |
+| Skill input is missing | `Required skill input is missing. Name: text, Source: /document/merged_content` `Missing value /document/normalized_images/0/imageTags.` `Unable to select 0 in array /document/pages of length 0.` | If this warning occurs for all documents, there could be a typo in the input paths. Check the property name casing. Check for an extra or missing `*` in the path. Verify that the documents from the data source provide the required inputs. |
| Skill language code input is invalid | Skill input `languageCode` has the following language codes `X,Y,Z`, at least one of which is invalid. | See more details below. |
+<a name="skill-input-languagecode-has-the-following-language-codes-xyz-at-least-one-of-which-is-invalid"></a>
## `Warning: Skill input 'languageCode' has the following language codes 'X,Y,Z', at least one of which is invalid.`
-One or more of the values passed into the optional `languageCode` input of a downstream skill is not supported. This can occur if you are passing the output of the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) to subsequent skills, and the output consists of more languages than are supported in those downstream skills.
-Note that you may also get a warning similar to this one if an invalid `countryHint` input gets passed to the LanguageDetectionSkill. If that happens, validate that the field you are using from your data source for that input contains valid ISO 3166-1 alpha-2 two letter country codes. If some are valid and some are invalid, continue with the following guidance but replace `languageCode` with `countryHint` and `defaultLanguageCode` with `defaultCountryHint` to match your use case.
+One or more of the values passed into the optional `languageCode` input of a downstream skill isn't supported. This can occur if you're passing the output of the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) to subsequent skills, and the output consists of more languages than are supported in those downstream skills.
+
+Note that you may also get a warning similar to this one if an invalid `countryHint` input gets passed to the LanguageDetectionSkill. If that happens, validate that the field you're using from your data source for that input contains valid ISO 3166-1 alpha-2 two letter country codes. If some are valid and some are invalid, continue with the following guidance but replace `languageCode` with `countryHint` and `defaultLanguageCode` with `defaultCountryHint` to match your use case.
If you know that your data set is all in one language, you should remove the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and the `languageCode` skill input and use the `defaultLanguageCode` skill parameter for that skill instead, assuming the language is supported for that skill.
-If you know that your data set contains multiple languages and thus you need the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and `languageCode` input, consider adding a [ConditionalSkill](cognitive-search-skill-conditional.md) to filter out the text with languages that are not supported before passing in the text to the downstream skill. Here is an example of what this might look like for the EntityRecognitionSkill:
+If you know that your data set contains multiple languages and thus you need the [LanguageDetectionSkill](cognitive-search-skill-language-detection.md) and `languageCode` input, consider adding a [ConditionalSkill](cognitive-search-skill-conditional.md) to filter out the text with languages that are not supported before passing in the text to the downstream skill. Here's an example of what this might look like for the EntityRecognitionSkill:
```json {
Here are some references for the currently supported languages for each of the s
* [Translator supported languages](../cognitive-services/translator/language-support.md) * [Text SplitSkill](cognitive-search-skill-textsplit.md) supported languages: `da, de, en, es, fi, fr, it, ko, pt`
+<a name="skill-input-was-truncated"></a>
## `Warning: Skill input was truncated`
-Cognitive skills have limits to the length of text that can be analyzed at once. If the text input of these skills is over that limit, we will truncate the text to meet the limit, and then perform the enrichment on that truncated text. This means that the skill is executed, but not over all of your data.
-In the example `LanguageDetectionSkill` below, the `'text'` input field may trigger this warning if it is over the character limit. You can find the skill input limits in the [skills documentation](cognitive-search-predefined-skills.md).
+Cognitive skills limit the length of text that can be analyzed at one time. If the text input exceeds the limit, the text is truncated before it's enriched. The skill executes, but not over all of your data.
+
+In the example `LanguageDetectionSkill` below, the `'text'` input field might trigger this warning if the input is over the character limit. Input limits can be found in the [skills reference documentation](cognitive-search-predefined-skills.md).
```json {
In the example `LanguageDetectionSkill` below, the `'text'` input field may trig
If you want to ensure that all text is analyzed, consider using the [Split skill](cognitive-search-skill-textsplit.md). -
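A minimal Split skill sketch, assuming the text to analyze lives in a hypothetical `/document/content` field, breaks long text into pages that downstream skills can process individually instead of truncating it:

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SplitSkill",
  "description": "Split long text into pages before downstream enrichment",
  "context": "/document",
  "textSplitMode": "pages",
  "maximumPageLength": 4000,
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "textItems", "targetName": "pages" }
  ]
}
```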
+<a name="web-api-skill-response-contains-warnings"></a>
## `Warning: Web API skill response contains warnings`
-The indexer was able to run a skill in the skillset, but the response from the Web API request indicated there were warnings during execution. Review the warnings to understand how your data is impacted and whether or not, action is required.
+The indexer ran the skill in the skillset, but the response from the Web API request indicates there are warnings. Review the warnings to understand how your data is impacted and whether further action is required.
+<a name="the-current-indexer-configuration-does-not-support-incremental-progress"></a>
## `Warning: The current indexer configuration does not support incremental progress`+ This warning only occurs for Cosmos DB data sources. Incremental progress during indexing ensures that if indexer execution is interrupted by transient failures or an execution time limit, the indexer can pick up where it left off next time it runs, instead of having to re-index the entire collection from scratch. This is especially important when indexing large collections. The ability to resume an unfinished indexing job is predicated on having documents ordered by the `_ts` column. The indexer uses the timestamp to determine which document to pick up next. If the `_ts` column is missing or if the indexer can't determine if a custom query is ordered by it, the indexer starts at the beginning and you'll see this warning.
-It is possible to override this behavior, enabling incremental progress and suppressing this warning by using the `assumeOrderByHighWaterMarkColumn` configuration property.
+It's possible to override this behavior, enabling incremental progress and suppressing this warning by using the `assumeOrderByHighWaterMarkColumn` configuration property.
For more information, see [Incremental progress and custom queries](search-howto-index-cosmosdb.md#IncrementalProgress).
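As a hedged sketch (the indexer, data source, and index names are hypothetical), the property is set in the indexer's configuration parameters:

```json
{
  "name": "cosmosdb-indexer",
  "dataSourceName": "cosmosdb-datasource",
  "targetIndexName": "cosmosdb-index",
  "parameters": {
    "configuration": {
      "assumeOrderByHighWaterMarkColumn": true
    }
  }
}
```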
+<a name="some-data-was-lost-during projection-row-x-in-table-y-has-string-property-z-which-was-too-long"></a>
## `Warning: Some data was lost during projection. Row 'X' in table 'Y' has string property 'Z' which was too long.` The [Table Storage service](https://azure.microsoft.com/services/storage/tables) has limits on how large [entity properties](/rest/api/storageservices/understanding-the-table-service-data-model#property-types) can be. Strings can have 32,000 characters or less. If a row with a string property longer than 32,000 characters is being projected, only the first 32,000 characters are preserved. To work around this issue, avoid projecting rows with string properties longer than 32,000 characters.
+<a name="truncated-extracted-text-to-x-characters"></a>
## `Warning: Truncated extracted text to X characters`
-Indexers limit how much text can be extracted from any one document. This limit depends on the pricing tier: 32,000 characters for Free tier, 64,000 for Basic, 4 million for Standard, 8 million for Standard S2, and 16 million for Standard S3. Text that was truncated will not be indexed. To avoid this warning, try breaking apart documents with large amounts of text into multiple, smaller documents.
+
+Indexers limit how much text can be extracted from any one document. This limit depends on the pricing tier: 32,000 characters for Free tier, 64,000 for Basic, 4 million for Standard, 8 million for Standard S2, and 16 million for Standard S3. Text that was truncated won't be indexed. To avoid this warning, try breaking apart documents with large amounts of text into multiple, smaller documents.
For more information, see [Indexer limits](search-limits-quotas-capacity.md#indexer-limits).
+<a name="could-not-map-output-field-x-to-search-index"></a>
## `Warning: Could not map output field 'X' to search index`+ Output field mappings that reference non-existent/null data will produce warnings for each document and result in an empty index field. To work around this issue, double-check your output field-mapping source paths for possible typos, or set a default value using the [Conditional skill](cognitive-search-skill-conditional.md#sample-skill-definition-2-set-a-default-value-for-a-value-that-doesnt-exist). See [Output field mapping](cognitive-search-output-field-mapping.md) for details. | Reason | Details/Example | Resolution | | | | |
-| Cannot iterate over non-array | "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." | This error occurs when the output is not an array. If you think the output should be an array, check the indicated output source field path for errors. For example, you might have a missing or extra `*` in the source field name. It's also possible that the input to this skill is null, resulting in an empty array. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
-| Unable to select `0` in non-array | "Unable to select `0` in non-array `/document/pages`." | This could happen if the skills output does not produce an array and the output source field name has array index or `*` in its path. Double check the paths provided in the output source field names and the field value for the indicated field name. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
+| Cannot iterate over non-array | "Cannot iterate over non-array `/document/normalized_images/0/imageCelebrities/0/detail/celebrities`." | This error occurs when the output isn't an array. If you think the output should be an array, check the indicated output source field path for errors. For example, you might have a missing or extra `*` in the source field name. It's also possible that the input to this skill is null, resulting in an empty array. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
+| Unable to select `0` in non-array | "Unable to select `0` in non-array `/document/pages`." | This could happen if the skills output doesn't produce an array and the output source field name has array index or `*` in its path. Double check the paths provided in the output source field names and the field value for the indicated field name. Find similar details in [Skill Input was Invalid](cognitive-search-common-errors-warnings.md#warning-skill-input-was-invalid) section. |
+<a name="the-data-change-detection-policy-is-configured-to-use-key-column-x"></a>
## `Warning: The data change detection policy is configured to use key column 'X'` [Data change detection policies](/rest/api/searchservice/create-data-source#data-change-detection-policies) have specific requirements for the columns they use to detect change. One of these requirements is that this column is updated every time the source item is changed. Another requirement is that the new value for this column is greater than the previous value. Key columns don't fulfill this requirement because they don't change on every update. To work around this issue, select a different column for the change detection policy.
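As a hedged illustration (the data source, table, and column names are hypothetical), pointing the high water mark policy at a column that increases on every update, such as a rowversion column, rather than at the key column avoids this warning:

```json
{
  "name": "example-sql-datasource",
  "type": "azuresql",
  "credentials": { "connectionString": "<your-connection-string>" },
  "container": { "name": "Products" },
  "dataChangeDetectionPolicy": {
    "@odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
    "highWaterMarkColumnName": "RowVersion"
  }
}
```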
+<a name="document-text-appears-to-be-utf-16-encoded-but-is-missing-a-byte-order-mark"></a>
## `Warning: Document text appears to be UTF-16 encoded, but is missing a byte order mark` The [indexer parsing modes](/rest/api/searchservice/create-indexer#blob-configuration-parameters) need to know how text is encoded before parsing it. The two most common ways of encoding text are UTF-16 and UTF-8. UTF-8 is a variable-length encoding where each character is between 1 byte and 4 bytes long. UTF-16 is a fixed-length encoding where each character is 2 bytes long. UTF-16 has two different variants, "big endian" and "little endian". Text encoding is determined by a "byte order mark", a series of bytes before the text.
If no byte order mark is present, the text is assumed to be encoded as UTF-8.
To work around this warning, determine what the text encoding for this blob is and add the appropriate byte order mark.
+<a name="cosmos-db-collection-has-a-lazy-indexing-policy"></a>
## `Warning: Cosmos DB collection 'X' has a Lazy indexing policy. Some data may be lost`+ Collections with [Lazy](../cosmos-db/index-policy.md#indexing-mode) indexing policies can't be queried consistently, resulting in your indexer missing data. To work around this warning, change your indexing policy to Consistent. ## `Warning: The document contains very long words (longer than 64 characters). These words may result in truncated and/or unreliable model predictions.`
-This warning is passed from the Language service of Azure Cognitive Services. In some cases, it is safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
+
+This warning is passed from the Language service of Azure Cognitive Services. In some cases, it's safe to ignore this warning, such as when your document contains a long URL (which likely isn't a key phrase or driving sentiment, etc.). Be aware that when a word is longer than 64 characters, it will be truncated to 64 characters which can affect model predictions.
search Index Ranking Similarity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-ranking-similarity.md
Depending on the age of your search service, Azure Cognitive Search supports two
+ An *Okapi BM25* algorithm, used in all search services created after July 15, 2020 + A *classic similarity* algorithm, used by all search services created before July 15, 2020
-BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size.
-
-For search services created after July 2020, BM25 is the sole similarity algorithm. If you try to set similarity to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
+BM25 ranking is the default because it tends to produce search rankings that align better with user expectations. It includes [parameters](#set-bm25-parameters) for tuning results based on factors such as document size. For search services created after July 2020, BM25 is the sole similarity algorithm. If you try to set "similarity" to ClassicSimilarity on a new service, an HTTP 400 error will be returned because that algorithm is not supported by the service.
For older services, classic similarity remains the default algorithm. Older services can [upgrade to BM25](#enable-bm25-scoring-on-older-services) on a per-index basis. When switching from classic to BM25, you can expect to see some differences in how search results are ordered.
For older services, classic similarity remains the default algorithm. Older serv
BM25 similarity adds two parameters to control the relevance score calculation. To set "similarity" parameters, issue a [Create or Update Index](/rest/api/searchservice/create-index) request as illustrated by the following example.
-Because Cognitive Search won't allow updates to a live index, you'll need to take the index offline so that the parameters can be added. Indexing and query requests will fail while the index is offline. The duration of the outage is the amount of time it takes to update the index, usually no more than several seconds. When the update is complete, the index comes back automatically. To take the index offline, append the "allowIndexDowntime=true" URI parameter on the request that sets the "similarity" property:
- ```http
-PUT https://[search service name].search.windows.net/indexes/[index name]?api-version=2020-06-30&allowIndexDowntime=true
+PUT [service-name].search.windows.net/indexes/[index-name]?api-version=2020-06-30&allowIndexDowntime=true
{ "similarity": { "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
PUT https://[search service name].search.windows.net/indexes/[index name]?api-ve
} ```
+Because Cognitive Search won't allow updates to a live index, you'll need to take the index offline so that the parameters can be added. Indexing and query requests will fail while the index is offline. The duration of the outage is the amount of time it takes to update the index, usually no more than several seconds. When the update is complete, the index comes back automatically. To take the index offline, append the "allowIndexDowntime=true" URI parameter on the request that sets the "similarity" property.
+ ### BM25 property reference | Property | Type | Description |
PUT https://[search service name].search.windows.net/indexes/[index name]?api-ve
## Enable BM25 scoring on older services
-If you are running a search service that was created from March 2014 through July 15, 2020, you can enable BM25 by setting a "similarity" property on new indexes. The property is only exposed on new indexes, so if want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a "similarity" property set to "Microsoft.Azure.Search.BM25Similarity".
+If you're running a search service that was created from March 2014 through July 15, 2020, you can enable BM25 by setting a "similarity" property on new indexes. The property is only exposed on new indexes, so if you want BM25 on an existing index, you must drop and [rebuild the index](search-howto-reindex.md) with a "similarity" property set to "Microsoft.Azure.Search.BM25Similarity".
Once an index exists with a "similarity" property, you can switch between `BM25Similarity` or `ClassicSimilarity`.
The following links describe the Similarity property in the Azure SDKs.
### REST example
-You can also use the [REST API](/rest/api/searchservice/create-index), as the following example illustrates:
+You can also use the [REST API](/rest/api/searchservice/create-index). The following example creates a new index with the "similarity" property set to BM25:
```http
-PUT https://[search service name].search.windows.net/indexes/[index name]?api-version=2020-06-30
+PUT [service-name].search.windows.net/indexes/[index-name]?api-version=2020-06-30
{ "name": "indexName", "fields": [
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Last updated 06/22/2022
# Similarity and scoring in Azure Cognitive Search
-This article describes relevance scoring and the similarity ranking algorithms used to rank search results in Azure Cognitive Search. A relevance score applies to matches returned in a [full text search query](search-lucene-query-architecture.md). Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked.
+This article describes relevance scoring and the similarity ranking algorithms used to compute search scores in Azure Cognitive Search. A relevance score applies to matches returned in [full text search](search-lucene-query-architecture.md), where the most relevant matches appear first. Filter queries, autocomplete and suggested queries, wildcard search or fuzzy search queries are not scored or ranked for relevance.
In Azure Cognitive Search, you can tune search relevance and boost search scores through these mechanisms:
In Azure Cognitive Search, you can tune search relevance and boost search scores
+ Scoring profiles + Custom scoring logic enabled through the *featuresMode* parameter
-## Relevance scoring
+> [!NOTE]
+> Matches are scored and ranked from high to low. The score is returned as "@search.score". By default, the top 50 are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results.
-Relevance scoring refers to the computation of a search score for every item returned in search results for full text search queries. The score is an indicator of an item's relevance in the context of the current query. The higher the score, the more relevant the item.
+## Relevance scoring
-In search results, items are rank ordered from high to low, based on the search scores calculated for each item. The score is returned in the response as "@search.score" on every document. By default, the top 50 are returned in the response, but you can use the **$top** parameter to return a smaller or larger number of items (up to 1000 in a single response), and **$skip** to get the next set of results.
+Relevance scoring refers to the computation of a search score that serves as an indicator of an item's relevance in the context of the current query. The higher the score, the more relevant the item.
-The search score is computed based on statistical properties of the data and the query. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#query-parameters)), favoring documents that contain many instances of the search term. The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF or* term frequency-inverse document frequency.
+The search score is computed based on statistical properties of the string input and the query itself. Azure Cognitive Search finds documents that match on search terms (some or all, depending on [searchMode](/rest/api/searchservice/search-documents#query-parameters)), favoring documents that contain many instances of the search term. The search score goes up even higher if the term is rare across the data index, but common within the document. The basis for this approach to computing relevance is known as *TF-IDF*, or term frequency-inverse document frequency.
-Search score values can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is not defined, and is not stable. Run the query again, and you might see items shift position, especially if you are using the free service or a billable service with multiple replicas. Given two items with an identical score, there is no guarantee which one appears first.
+Search scores can be repeated throughout a result set. When multiple hits have the same search score, the ordering of the same scored items is undefined and not stable. Run the query again, and you might see items shift position, especially if you are using the free service or a billable service with multiple replicas. Given two items with an identical score, there is no guarantee which one appears first.
If you want to break the tie among repeating scores, you can add an **$orderby** clause to first order by score, then order by another sortable field (for example, `$orderby=search.score() desc,Rating desc`). For more information, see [$orderby](search-query-odata-orderby.md).
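For instance, a hedged sketch of a Search Documents POST request body (assuming a hypothetical index with a sortable `Rating` field, as in the example above) that breaks ties this way:

```json
{
  "search": "historic hotel",
  "orderby": "search.score() desc, Rating desc",
  "top": 50
}
```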
search Monitor Azure Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search.md
The system information above can also be read from the Azure portal.
On Azure portal pages, check the Usage and Monitoring tabs for counts and metrics. Commands in the left navigation pane provide access to configuration and data exploration pages.
- ![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png
- "Azure Monitor integration in a search service")
+ ![Azure Monitor integration in a search service](./media/search-monitor-usage/azure-monitor-search.png "Azure Monitor integration in a search service")
> [!NOTE] > Cognitive Search does not monitor individual user access to search data (sometimes referred to as document-level or row-level access). Indexing and query requests originate from a client application that presents either an admin or query API key on the request.
search Search Indexer Howto Access Ip Restricted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-ip-restricted.md
This article explains how to find the IP address of your search service and conf
## Get a search service IP address
-1. Determine the fully qualified domain name (FQDN) of your search service. This will look like `<search-service-name>.search.windows.net`. You can find the FQDN by looking up your search service on the Azure portal.
+1. Get the fully qualified domain name (FQDN) of your search service. This will look like `<search-service-name>.search.windows.net`. You can find the FQDN by looking up your search service on the Azure portal.
:::image type="content" source="media\search-indexer-howto-secure-access\search-service-portal.png" alt-text="Screenshot of the search service Overview page." border="true":::
This article explains how to find the IP address of your search service and conf
1. Copy the IP address so that you can specify it on an inbound rule in the next step. In the example below, the IP address that you should copy is "150.0.0.1".
- ```azurepowershell
+ ```bash
nslookup contoso.search.windows.net
Server:  server.example.org
Address:  10.50.10.50
This article explains how to find the IP address of your search service and conf
## Get the Azure portal IP address
-If you're using the Azure portal or the [Import Data wizard](search-import-data-portal.md) to create an indexer, you'll need an inbound rule for the Azure portal.
+If you're using the Azure portal or the [Import Data wizard](search-import-data-portal.md) to create an indexer, you'll need an inbound rule for the Azure portal as well.
-To get the portal IP address, perform `nslookup` on `stamp2.ext.search.windows.net`, which is the domain of the traffic manager.
+To get the portal's IP address, perform `nslookup` (or `ping`) on `stamp2.ext.search.windows.net`, which is the domain of the traffic manager. For nslookup, the IP address is visible in the "Non-authoritative answer" portion of the response.
-For nslookup, the IP address be visible in the "Non-authoritative answer" portion of the response. For ping, the request will time out, but the IP address will be visible in the response. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
+In the example below, the IP address that you should copy is "52.252.175.48".
+
+```bash
+$ nslookup stamp2.ext.search.windows.net
+Server: ZenWiFi_ET8-0410
+Address: 192.168.50.1
+
+Non-authoritative answer:
+Name: azsyrie.northcentralus.cloudapp.azure.com
+Address: 52.252.175.48
+Aliases: stamp2.ext.search.windows.net
+ azs-ux-prod.trafficmanager.net
+ azspncuux.management.search.windows.net
+```
Clusters in different regions connect to different traffic managers. Regardless of the domain name, the IP address returned from `nslookup` or `ping` is the correct one to use when defining an inbound firewall rule for the Azure portal in your region.
+For ping, the request will time out, but the IP address will be visible in the response. For example, in the message "Pinging azsyrie.northcentralus.cloudapp.azure.com [52.252.175.48]", the IP address is "52.252.175.48".
+ ## Get IP addresses for "AzureCognitiveSearch" service tag
-We also require customers to create an inbound rule that allows requests from the [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment) to ensure we optimize the resource availability for search services. This step explains how to get the range of IP addresses needed for this inbound rule.
+You'll also need to create an inbound rule that allows requests from the [multi-tenant execution environment](search-indexer-securing-resources.md#indexer-execution-environment). This environment is managed by Microsoft and is used to offload processing-intensive jobs that could otherwise overwhelm your search service. This section explains how to get the range of IP addresses needed to create this inbound rule.
-An IP address range is defined for each region that supports Azure Cognitive Search. You can get this IP address range from the `AzureCognitiveSearch` service tag.
+An IP address range is defined for each region that supports Azure Cognitive Search. You'll need to specify the full range to ensure the success of requests originating from the multi-tenant execution environment.
-1. Get the IP address ranges for the `AzureCognitiveSearch` service tag using either the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or the [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
+You can get this IP address range from the `AzureCognitiveSearch` service tag.
-1. If the search service is the Azure Public cloud, download the [Azure Public JSON file](https://www.microsoft.com/download/details.aspx?id=56519).
+1. Use either the [discovery API](../virtual-network/service-tags-overview.md#use-the-service-tag-discovery-api) or the [downloadable JSON file](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
+
+ If the search service is the Azure Public cloud, download the [Azure Public JSON file](https://www.microsoft.com/download/details.aspx?id=56519).
1. Open the JSON file and search for "AzureCognitiveSearch". For a search service in WestUS2, the IP addresses for the multi-tenant indexer execution environment are:
An IP address range is defined for each region that supports Azure Cognitive Sea
1. For IP addresses that have the "/32" suffix, drop the "/32" (40.91.93.84/32 becomes 40.91.93.84 in the rule definition). All other IP addresses can be used verbatim.
+1. Copy all of the IP addresses for the region.
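If you're working from the downloadable JSON file, the following sketch shows one way to pull out just your region's address prefixes. The file name, region, and the use of `jq` are assumptions for illustration, not part of the documented procedure.

```bash
# A minimal sketch, assuming the Azure Public service tags file was downloaded
# as ServiceTags_Public.json, jq is installed, and WestUS2 is the target region.
jq -r '.values[]
       | select(.name == "AzureCognitiveSearch.WestUS2")
       | .properties.addressPrefixes[]' ServiceTags_Public.json |
  sed 's|/32$||'   # drop the "/32" suffix so the values fit the rule definition
```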
+ ## Add IP addresses to IP firewall rules
-Now that you have the necessary IP addresses, you can set up the inbound rule. The easiest way to add IP address ranges to a storage account's firewall rule is through the Azure portal.
+Now that you have the necessary IP addresses, you can set up the inbound rules. The easiest way to add IP address ranges to a storage account's firewall rule is through the Azure portal.
1. Locate the storage account on the portal and open **Networking** on the left navigation pane.
Now that you have the necessary IP addresses, you can set up the inbound rule. T
:::image type="content" source="media\search-indexer-howto-secure-access\storage-firewall.png" alt-text="Screenshot of Azure Storage Firewall and virtual networks page" border="true":::
-1. Add the IP addresses obtained previously in the address range and select **Save**. You should have rules for the search service, Azure portal (optional), plus all of the IP ranges for the "AzureCognitiveSearch" service tag for your region
+1. Add the IP addresses obtained previously in the address range and select **Save**. You should have rules for the search service, Azure portal (optional), plus all of the IP addresses for the "AzureCognitiveSearch" service tag for your region.
:::image type="content" source="media\search-indexer-howto-secure-access\storage-firewall-ip.png" alt-text="Screenshot of the IP address section of the page." border="true":::
-It can take five to ten minutes for the firewall rules to be updated, after which indexers should be able to access the data in the storage account.
+It can take five to ten minutes for the firewall rules to be updated, after which indexers should be able to access storage account data behind the firewall.
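If you prefer scripting to the portal, the same rules can be added with the Azure CLI. This is a sketch only; the resource group, storage account, and IP values below are placeholders taken from the examples earlier in this article.

```bash
# A minimal sketch, assuming the Azure CLI is installed and signed in;
# resource names and IP addresses are placeholders.
az storage account network-rule add \
  --resource-group my-resource-group \
  --account-name mystorageaccount \
  --ip-address 150.0.0.1            # search service IP from the nslookup step

az storage account network-rule add \
  --resource-group my-resource-group \
  --account-name mystorageaccount \
  --ip-address 52.252.175.48        # Azure portal IP (optional)
```

Repeat the command for each address range in the "AzureCognitiveSearch" service tag for your region.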
## Next steps
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
Many Azure resources, such as Azure storage accounts, can be configured to accep
For [Azure Storage](../storage/common/storage-network-security.md?tabs=azure-portal), if both the storage account and the search service are in the same region, outbound traffic uses a private IP address to communicate to storage and occurs over the Microsoft backbone network. For this scenario, you can omit private endpoints through Azure Cognitive Search. For other Azure PaaS resources, we suggest that you review the networking documentation for those resources to determine whether a private endpoint is helpful.
-To create a shared private link, use the Azure portal or the [Create Or Update Shared Private Link](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update) operation in the Azure Cognitive Search Management REST API.
+To create a private endpoint that an indexer can use, work in the Azure portal or call the [Create Or Update Shared Private Link](/rest/api/searchmanagement/2020-08-01/shared-private-link-resources/create-or-update) operation in the Azure Cognitive Search Management REST API. A private endpoint that's used by your search service is created through the Cognitive Search APIs or the Azure Cognitive Search portal pages.
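For reference, here's a sketch of what that Management REST call might look like when issued through `az rest`; the subscription, resource group, service, and target resource IDs are all placeholders, and the shared private link name `blob-pe` is an example only.

```bash
# A minimal sketch, assuming the Azure CLI is signed in; every ID below is a placeholder.
az rest --method put \
  --uri "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>/sharedPrivateLinkResources/blob-pe?api-version=2020-08-01" \
  --body '{
    "properties": {
      "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>",
      "groupId": "blob",
      "requestMessage": "Shared private link for indexer access."
    }
  }'
```

After the call succeeds, the owner of the target resource still needs to approve the private endpoint connection.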
## Terminology
search Search Indexer Securing Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-securing-resources.md
Last updated 06/20/2022
# Indexer access to content protected by Azure network security
-If your search application requirements include an Azure virtual network, this concept article explains how a search indexer can access content that's protected by network security. It describes the outbound traffic patterns and indexer execution environments. It also covers the network protections supported by Cognitive Search and factors that might influence your approach. Finally, because Azure Storage is used for both data access and persistent storage, this article also covers network considerations that are specific to search and storage connectivity.
+If your search application requirements include an Azure virtual network, this concept article explains how a search indexer can access content that's protected by network security. It describes the outbound traffic patterns and indexer execution environments. It also covers the network protections supported by Cognitive Search and factors that might influence your security strategy. Finally, because Azure Storage is used for both data access and persistent storage, this article also covers network considerations that are specific to search and storage connectivity.
Looking for step-by-step instructions instead? See [How to configure firewall rules to allow indexer access](search-indexer-howto-access-ip-restricted.md) or [How to make outbound connections through a private endpoint](search-indexer-howto-access-private.md).
When integrating Azure Cognitive Search into a solution that runs on a virtual n
Given the above constraints, your choices for achieving search integration in a virtual network are: -- Configure an inbound firewall rule on your Azure resource that admits indexer requests for data.
+- Configure an inbound firewall rule on your Azure PaaS resource that admits indexer requests for data.
-- Configure an outbound connection that makes indexer connections using a [private endpoint](../private-link/private-endpoint-overview.md).
+- Configure an outbound connection from Search that makes indexer connections using a [private endpoint](../private-link/private-endpoint-overview.md).
For a private endpoint, the search service connection to your protected resource is through a *shared private link*. A shared private link is an [Azure Private Link](../private-link/private-link-overview.md) resource that's created, managed, and used from within Cognitive Search. If your resources are fully locked down (running on a protected virtual network, or otherwise not available over a public connection), a private endpoint is your only choice.
search Search Query Fuzzy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-fuzzy.md
Collectively, the graphs are submitted as match criteria against tokens in the i
## Indexing for fuzzy search
-Make sure the index includes text fields that are conducive to fuzzy search, such as names, categories, descriptions, or tags.
+String fields that are attributed as "searchable" are candidates for fuzzy search.
-Analyzers aren't used to create an expansion graph, but that doesn't mean analyzers should be ignored in fuzzy search scenarios. Analyzers are important for tokenization during indexing, where tokens are used for both full text search and for matching against the graph.
+Analyzers aren't used to create an expansion graph, but that doesn't mean analyzers should be ignored in fuzzy search scenarios. Analyzers are important for tokenization during indexing, where tokens in the inverted indexes are used for matching against the graph.
-As always, if test queries aren't producing the matches you expect, you could try varying the indexing analyzer, setting it to a [language analyzer](index-add-language-analyzers.md), to see if you get better results. Some languages, particularly those with vowel mutations, can benefit from the inflection and irregular word forms generated by the Microsoft natural language processors. In some cases, using the right language analyzer can make a difference in whether a term is tokenized in a way that is compatible with the value provided by the user.
+As always, if test queries aren't producing the matches you expect, experiment with different indexing analyzers. For example, try a [language analyzer](index-add-language-analyzers.md) to see if you get better results. Some languages, particularly those with vowel mutations, can benefit from the inflection and irregular word forms generated by the Microsoft natural language processors. In some cases, using the right language analyzer can make a difference in whether a term is tokenized in a way that is compatible with the value provided by the user.
-## How to use fuzzy search
+## How to invoke fuzzy search
-Fuzzy queries are constructed using the full Lucene query syntax, invoking the [full Lucene query parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html).
+Fuzzy queries are constructed using the full Lucene query syntax, invoking the [full Lucene query parser](https://lucene.apache.org/core/6_6_1/queryparser/org/apache/lucene/queryparser/classic/package-summary.html), and appending a tilde character `~` after each whole term entered by the user.
+
+Here's an example of a query request that invokes fuzzy search. It includes four terms, two of which are misspelled:
```http POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30 {
- "search": "seatle~2",
+ "search": "seatle~ waterfront~ view~ hotle~",
"queryType": "full", "searchMode": "any",
- "searchFields": "HotelName, Address/City",
- "select": "HotelName, Address/City,",
+ "searchFields": "HotelName, Description",
+ "select": "HotelName, Description, Address/City,",
"count": "true" } ``` 1. Set the query type to the full Lucene syntax (`queryType=full`).
-1. Optionally, scope the request to specific fields, using this parameter (`searchFields=<field1,field2>`).
-
-1. Provide the query string. An expansion graph will be created for every term in the query input. Append the tilde (`~`) operator at the end of each whole term (`search=<string>~`).
+1. Provide the query string where each term is followed by a tilde (`~`) operator at the end of each whole term (`search=<string>~`). An expansion graph will be created for every term in the query input.
Include an optional parameter, a number between 0 and 2 (default), if you want to specify the edit distance (`~1`). For example, "blue~" or "blue~1" would return "blue", "blues", and "glue".
-In Azure Cognitive Search, besides the term and distance (maximum of 2), there are no other parameters to set on the query.
+Optionally, you can improve query performance by scoping the request to specific fields. Use the `searchFields` parameter to specify which fields to search. You can also use the `select` property to specify which fields are returned in the query response.
## Testing fuzzy search
security Antimalware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/antimalware.md
The scenarios to enable and configure antimalware, including monitoring for Azur
To enable and configure Microsoft Antimalware for Azure Virtual Machines using the Azure portal while provisioning a Virtual Machine, follow the steps below:
-1. Sign in to the Azure portal at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. To create a new virtual machine, navigate to **Virtual machines**, select **Add**, and choose **Windows Server**. 3. Select the version of Windows Server that you would like to use. 4. Select **Create**.
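If you'd rather script this for an existing VM instead of using the portal, a hedged sketch with the Azure CLI looks like the following; the resource group and VM names are placeholders, and the settings payload shows only the minimal `AntimalwareEnabled` switch.

```bash
# A minimal sketch, assuming the Azure CLI is signed in and the Windows VM already exists;
# resource names are placeholders.
az vm extension set \
  --resource-group my-resource-group \
  --vm-name my-windows-vm \
  --name IaaSAntimalware \
  --publisher Microsoft.Azure.Security \
  --settings '{"AntimalwareEnabled": true}'
```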
sentinel Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entities.md
Learn [which identifiers strongly identify an entity](entities-reference.md).
## Entity pages
-When you encounter any entity (currently limited to users and hosts) in a search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
+When you encounter a user or host entity (IP address entities are in preview) in an entity search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior.
Entity pages consist of three parts: -- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Microsoft Defender for Cloud, and Microsoft Defender for Cloud.
+- The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Microsoft Defender for Cloud, CEF/Syslog, and Microsoft 365 Defender.
-- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams.
+- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, [anomalies](soc-ml-anomalies.md), and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing.
-- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify anomalies and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.
+- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify [anomalies](soc-ml-anomalies.md) and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.
+
+> [!NOTE]
+> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md).
### The timeline
The following types of items are included in the timeline:
- Bookmarks - any bookmarks that include the specific entity shown on the page. -- Activities - aggregation of notable events relating to the entity.
+- Anomalies - UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.
+
+- Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing.
### Entity Insights
Entity pages are designed to be part of multiple usage scenarios, and can be acc
:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/entity-pages-use-cases.png" alt-text="Entity page use cases":::
+Entity page information is stored in the **BehaviorAnalytics** table, described in detail in the [Microsoft Sentinel UEBA enrichments reference](ueba-enrichments.md).
+ ## Next steps In this document, you learned about working with entities in Microsoft Sentinel. For practical guidance on implementation, and to use the insights you've gained, see the following articles:
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Learn more about [entities in Microsoft Sentinel](entities.md) and see the full
When you encounter a user or host entity (IP address entities are in preview) in an entity search, an alert, or an investigation, you can select the entity and be taken to an **entity page**, a datasheet full of useful information about that entity. The types of information you will find on this page include basic facts about the entity, a timeline of notable events related to this entity and insights about the entity's behavior. Entity pages consist of three parts:+ - The left-side panel contains the entity's identifying information, collected from data sources like Azure Active Directory, Azure Monitor, Microsoft Defender for Cloud, CEF/Syslog, and Microsoft 365 Defender. -- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing.
+- The center panel shows a graphical and textual timeline of notable events related to the entity, such as alerts, bookmarks, [anomalies](soc-ml-anomalies.md), and activities. Activities are aggregations of notable events from Log Analytics. The queries that detect those activities are developed by Microsoft security research teams, and you can now [add your own custom queries to detect activities](customize-entity-activities.md) of your choosing.
-- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify anomalies and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.
+- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify [anomalies](soc-ml-anomalies.md) and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models.
> [!NOTE] > The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md).
The following types of items are included in the timeline:
- Bookmarks - any bookmarks that include the specific entity shown on the page.
+- Anomalies - UEBA detections based on dynamic baselines created for each entity across various data inputs and against its own historical activities, those of its peers, and those of the organization as a whole.
+ - Activities - aggregation of notable events relating to the entity. A wide range of activities are collected automatically, and you can now [customize this section by adding activities](customize-entity-activities.md) of your own choosing. ### Entity Insights
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
The following limitations currently govern the use of NRT rules:
1. No more than 20 rules can be defined per customer at this time.
+1. By design, NRT rules will only work properly on log sources with an **ingestion delay of less than 12 hours**.
+
+ (Because the NRT rule type is meant to approximate **real-time** data ingestion, using NRT rules on log sources with a significant ingestion delay offers no advantage, even if the delay is far less than 12 hours.)
+ 1. As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect: 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
Last updated 03/02/2022
This article explains how to collect audit logs from your SAP HANA database. > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
If you have SAP HANA database audit logs configured with Syslog, you'll also need to configure your Log Analytics agent to collect the Syslog files.
If you have SAP HANA database audit logs configured with Syslog, you'll also nee
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Configure Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit.md
Title: Enable and configure SAP auditing for Microsoft Sentinel | Microsoft Docs
-description: This article shows you how to enable and configure auditing for the Microsoft Sentinel Continuous Threat Monitoring solution for SAP, so that you can have complete visibility into your SAP solution.
+description: This article shows you how to enable and configure auditing for the Microsoft Sentinel Threat Monitoring solution for SAP, so that you can have complete visibility into your SAP solution.
Last updated 04/27/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article shows you how to enable and configure auditing for the Microsoft Sentinel Continuous Threat Monitoring solution for SAP, so that you can have complete visibility into your SAP solution.
+This article shows you how to enable and configure auditing for the Microsoft Sentinel Threat Monitoring solution for SAP, so that you can have complete visibility into your SAP solution.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> > We strongly recommend that any management of your SAP system is carried out by an experienced SAP system administrator. > > The steps in this article may vary, depending on your SAP system's version, and should be considered a sample only.
-Some installations of SAP systems may not have audit log enabled by default. For best results in evaluating the performance and efficacy of the Microsoft Sentinel Continuous Threat Monitoring solution for SAP, enable auditing of your SAP system and configure the audit parameters.
+Some installations of SAP systems may not have the audit log enabled by default. For best results in evaluating the performance and efficacy of the Microsoft Sentinel Threat Monitoring solution for SAP, enable auditing of your SAP system and configure the audit parameters.
## Check if auditing is enabled
Some installations of SAP systems may not have audit log enabled by default. For
### Recommended audit categories
-The following table lists Message IDs used by the Continuous Threat Monitoring for SAP solution. In order for analytics rules to detect events properly, we strongly recommend configuring an audit policy that includes the message IDs listed below as a minimum.
+The following table lists Message IDs used by the Threat Monitoring for SAP solution. In order for analytics rules to detect events properly, we strongly recommend configuring an audit policy that includes the message IDs listed below as a minimum.
| Message ID | Message text | Category name | Event Weighting | Class Used in Rules | | - | - | - | - | - |
The following table lists Message IDs used by the Continuous Threat Monitoring f
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Configure Snc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md
Title: Deploy the Microsoft Sentinel SAP data connector with Secure Network Communications (SNC) | Microsoft Docs
+ Title: Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with Secure Network Communications (SNC) | Microsoft Docs
description: This article shows you how to deploy the **Microsoft Sentinel data connector for SAP** to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications.
Last updated 05/03/2022
-# Deploy the Microsoft Sentinel SAP data connector with SNC
+# Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC
[!INCLUDE [Banner for top of topics](../includes/banner.md)] This article shows you how to deploy the **Microsoft Sentinel data connector for SAP** to ingest NetWeaver/ABAP logs over a secure connection using Secure Network Communications (SNC). > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-The Continuous Threat Monitoring for SAP data connector agent typically connects to an SAP ABAP server using an RFC connection, and a user's username and password for authentication.
+The Threat Monitoring for SAP data connector agent typically connects to an SAP ABAP server using an RFC connection, and a user's username and password for authentication.
However, some environments may require that the connection be made over an encrypted channel and that client certificates be used for authentication. In these cases, you can use SAP Secure Network Communications (SNC) for this purpose, taking the appropriate steps as outlined in this article.
For additional information on options available in the kickstart script, review
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Configure Transport https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-transport.md
Title: Configure SAP Transport Management System to connect from Microsoft Sentinel | Microsoft Docs
-description: This article shows you how to configure the SAP Transport Management System in the event of an error or in a lab environment where it hasn't already been configured, in order to successfully deploy the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel.
+description: This article shows you how to configure the SAP Transport Management System in the event of an error or in a lab environment where it hasn't already been configured, in order to successfully deploy the Threat Monitoring solution for SAP in Microsoft Sentinel.
Last updated 04/07/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article shows you how to configure the SAP Transport Management System in order to successfully deploy the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel.
+This article shows you how to configure the SAP Transport Management System in order to successfully deploy the Threat Monitoring solution for SAP in Microsoft Sentinel.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
SAP's Transport Management System is normally already configured on production systems. However, in a lab environment, where CRs often haven't been previously installed, configuration may be required.
The following steps show the process for configuring the Transport Management Sy
## Next steps
-Now that you've configured the Transport Management System, you'll be able to successfully complete the `STMS_IMPORT` transaction and you can continue [preparing your SAP environment](preparing-sap.md) for deploying the Continuous Threat Monitoring solution for SAP in Microsoft Sentinel.
+Now that you've configured the Transport Management System, you'll be able to successfully complete the `STMS_IMPORT` transaction and you can continue [preparing your SAP environment](preparing-sap.md) for deploying the Threat Monitoring solution for SAP in Microsoft Sentinel.
> [!div class="nextstepaction"] > [Deploy SAP Change Requests and configure authorization](preparing-sap.md#import-the-crs)
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
Title: Deploy and configure the Microsoft Sentinel SAP data connector agent container | Microsoft Docs
-description: This article shows you how to deploy the SAP data connector agent container in order to ingest SAP data into Microsoft Sentinel, as part of Microsoft Sentinel's Continuous Threat Monitoring solution for SAP.
+ Title: Deploy and configure the Microsoft Sentinel Threat Monitoring for SAP data connector agent container | Microsoft Docs
+description: This article shows you how to deploy the SAP data connector agent container in order to ingest SAP data into Microsoft Sentinel, as part of Microsoft Sentinel's Threat Monitoring solution for SAP.
Last updated 04/12/2022
-# Deploy and configure the Microsoft Sentinel SAP data connector agent container
+# Deploy and configure the Microsoft Sentinel Threat Monitoring for SAP data connector agent container
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article shows you how to deploy the SAP data connector agent container in order to ingest SAP data into Microsoft Sentinel, as part of Microsoft Sentinel's Continuous Threat Monitoring solution for SAP.
+This article shows you how to deploy the SAP data connector agent container in order to ingest SAP data into Microsoft Sentinel, as part of Microsoft Sentinel's Threat Monitoring solution for SAP.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Deployment milestones
-Deployment of the SAP continuous threat monitoring solution is divided into the following sections
+Deployment of the Threat Monitoring for SAP solution is divided into the following sections:
1. [Deployment overview](deployment-overview.md)
Deployment of the SAP continuous threat monitoring solution is divided into the
1. [Deploy SAP security content](deploy-sap-security-content.md)
+1. [Configure Threat Monitoring for SAP solution](deployment-solution-configuration.md)
+ 1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure SAP data connector to use SNC](configure-snc.md)
Deployment of the SAP continuous threat monitoring solution is divided into the
## Data connector agent deployment overview
-For the Continuous Threat Monitoring solution for SAP to operate correctly, you must first get your SAP data into Microsoft Sentinel. To accomplish this, you need to deploy the solution's SAP data connector agent.
+For the Threat Monitoring solution for SAP to operate correctly, you must first get your SAP data into Microsoft Sentinel. To accomplish this, you need to deploy the solution's SAP data connector agent.
The data connector agent runs as a container on a Linux virtual machine (VM). This VM can be hosted either in Azure, in a third-party cloud, or on-premises. We recommend that you install and configure this container using a *kickstart* script; however, you can choose to [deploy the container manually](?tabs=deploy-manually#deploy-the-data-connector-agent-container).
If you're not using SNC, then your SAP configuration and authentication secrets
## Next steps
-Once connector is deployed, proceed to deploy Continuous Threat Monitoring for SAP solution content
+Once the connector is deployed, proceed to deploy the Threat Monitoring for SAP solution content:
> [!div class="nextstepaction"] > [Deploy SAP security content](deploy-sap-security-content.md)
sentinel Deploy Sap Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-sap-security-content.md
Title: Deploy SAP security content in Microsoft Sentinel | Microsoft Docs
-description: This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Continuous Threat Monitoring solution for SAP.
+description: This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Threat Monitoring solution for SAP.
Last updated 04/27/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Continuous Threat Monitoring solution for SAP.
+This article shows you how to deploy Microsoft Sentinel security content into your Microsoft Sentinel workspace. This content makes up the remaining parts of the Threat Monitoring solution for SAP.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Deployment milestones
Track your SAP solution deployment journey through this series of articles:
1. **Deploy SAP security content (*You are here*)**
+1. [Configure Threat Monitoring for SAP solution](deployment-solution-configuration.md)
+ 1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure SAP data connector to use SNC](configure-snc.md)
Track your SAP solution deployment journey through this series of articles:
Deploy the [SAP security content](sap-solution-security-content.md) from the Microsoft Sentinel **Content hub** and **Watchlists** areas.
-Deploying the **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution causes the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
+Deploying the **Microsoft Sentinel - Threat Monitoring for SAP** solution causes the SAP data connector to be displayed in the Microsoft Sentinel **Data connectors** area. The solution also deploys the **SAP - System Applications and Products** workbook and SAP-related analytics rules.
To deploy SAP solution security content, do the following:
To deploy SAP solution security content, do the following:
The **Content hub (Preview)** page displays a filtered, searchable list of solutions.
-1. To open the SAP solution page, select **Continuous Threat Monitoring for SAP**.
+1. To open the SAP solution page, select **Threat Monitoring for SAP**.
- :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel - Continuous Threat Monitoring for SAP' solution pane." lightbox="media/deploy-sap-security-content/sap-solution.png":::
+ :::image type="content" source="./media/deploy-sap-security-content/sap-solution.png" alt-text="Screenshot of the 'Microsoft Sentinel - Threat Monitoring for SAP' solution pane." lightbox="media/deploy-sap-security-content/sap-solution.png":::
1. To launch the solution deployment wizard, select **Create**, and then enter the details of the Azure subscription, resource group, and Log Analytics workspace (the one used by Microsoft Sentinel) where you want to deploy the solution. 1. Select **Next** to cycle through the **Data Connectors**, **Analytics**, and **Workbooks** tabs, where you can learn about the components that will be deployed with this solution.
- For more information, see [Microsoft Sentinel SAP solution: security content reference (public preview)](sap-solution-security-content.md).
+ For more information, see [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference (public preview)](sap-solution-security-content.md).
1. On the **Review + create** tab, wait for the **Validation Passed** message, then select **Create** to deploy the solution.
To deploy SAP solution security content, do the following:
- **Threat Management** > **Workbooks** > **My workbooks**, to find the [built-in SAP workbooks](sap-solution-security-content.md#built-in-workbooks). - **Configuration** > **Analytics** to find a series of [SAP-related analytics rules](sap-solution-security-content.md#built-in-analytics-rules).
-1. In Microsoft Sentinel, go to the **Microsoft Sentinel Continuous Threat Monitoring for SAP** data connector to confirm the connection:
+1. In Microsoft Sentinel, go to the **Microsoft Sentinel Threat Monitoring for SAP** data connector to confirm the connection:
- [![Screenshot of the Microsoft Sentinel Continuous Threat Monitoring for SAP data connector page.](./media/deploy-sap-security-content/sap-data-connector.png)](./media/deploy-sap-security-content/sap-data-connector.png#lightbox)
+ [![Screenshot of the Microsoft Sentinel Threat Monitoring for SAP data connector page.](./media/deploy-sap-security-content/sap-data-connector.png)](./media/deploy-sap-security-content/sap-data-connector.png#lightbox)
SAP ABAP logs are displayed on the Microsoft Sentinel **Logs** page, under **Custom logs**: [![Screenshot of the SAP ABAP logs in the 'Custom Logs' area in Microsoft Sentinel.](./media/deploy-sap-security-content/sap-logs-in-sentinel.png)](./media/deploy-sap-security-content/sap-logs-in-sentinel.png#lightbox)
- For more information, see [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md).
+ For more information, see [Microsoft Sentinel Threat Monitoring for SAP solution logs reference](sap-solution-log-reference.md).
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Title: Deploy Continuous Threat Monitoring for SAP in Microsoft Sentinel | Microsoft Docs
-description: This article introduces you to the process of deploying the Microsoft Sentinel Continuous Threat Monitoring solution for SAP.
+ Title: Deploy Threat Monitoring for SAP in Microsoft Sentinel | Microsoft Docs
+description: This article introduces you to the process of deploying the Microsoft Sentinel Threat Monitoring solution for SAP.
Last updated 04/12/2022
-# Deploy Continuous Threat Monitoring for SAP in Microsoft Sentinel
+# Deploy Threat Monitoring for SAP in Microsoft Sentinel
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article introduces you to the process of deploying the Microsoft Sentinel Continuous Threat Monitoring solution for SAP. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones) below.
+This article introduces you to the process of deploying the Microsoft Sentinel Threat Monitoring solution for SAP. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones) below.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Overview
-**Continuous Threat Monitoring for SAP** is a [Microsoft Sentinel solution](../sentinel-solutions.md) that you can use to monitor your SAP systems and detect sophisticated threats throughout the business logic and application layers. The solution includes the following components:
+**Threat Monitoring for SAP** is a [Microsoft Sentinel solution](../sentinel-solutions.md) that you can use to monitor your SAP systems and detect sophisticated threats throughout the business logic and application layers. The solution includes the following components:
- The SAP data connector for data ingestion. - Analytics rules and watchlists for threat detection. - Workbooks for interactive data visualization.
-The SAP data connector is an agent, installed on a VM or a physical server, that collects application logs from across the entire SAP system landscape. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the SAP Continuous Threat Monitoring solution ΓÇô the analytics rules, workbooks, and watchlists ΓÇô to gain insight into your organization's SAP environment and to detect and respond to security threats.
+The SAP data connector is an agent, installed on a VM or a physical server, that collects application logs from across the entire SAP system landscape. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the Threat Monitoring for SAP solution (the analytics rules, workbooks, and watchlists) to gain insight into your organization's SAP environment and to detect and respond to security threats.
## Deployment milestones
Follow your deployment journey through this series of articles, in which you'll
| Milestone | Article | | | - | | **1. Deployment overview** | **YOU ARE HERE** |
-| **2. Deployment prerequisites** | [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
+| **2. Deployment prerequisites** | [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) |
| **3. Prepare SAP environment** | [Deploying SAP CRs and configuring authorization](preparing-sap.md) | | **4. Deploy data connector agent** | [Deploy and configure the data connector agent container](deploy-data-connector-agent-container.md) | | **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md)
-| **6. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure SAP data connector to use SNC](configure-snc.md)
+| **6. Configure Threat Monitoring for SAP solution** | [Configure Threat Monitoring for SAP solution](deployment-solution-configuration.md)
+| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure SAP data connector to use SNC](configure-snc.md)
## Next steps
-Begin the deployment of SAP continuous threat monitoring solution by reviewing the Prerequisites
+Begin the deployment of the Threat Monitoring for SAP solution by reviewing the prerequisites:
> [!div class="nextstepaction"] > [Prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
+
+ Title: Configure Microsoft Sentinel Threat Monitoring for SAP solution
+description: This article shows you how to configure the deployed Threat Monitoring for SAP solution
+++ Last updated : 04/27/2022++
+# Configure Threat Monitoring for SAP solution
++
+This article provides best practices for configuring the Microsoft Sentinel Threat Monitoring solution for SAP. The full deployment process is detailed in a whole set of articles linked under [Deployment milestones](deployment-overview.md#deployment-milestones).
+
+> [!IMPORTANT]
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Deploying the data collector agent and the solution in Microsoft Sentinel gives you the ability to monitor SAP systems for suspicious activities and identify threats. However, for best results, we strongly recommend carrying out several additional configuration steps that depend heavily on your SAP deployment.
+
+## Deployment milestones
+
+Track your SAP solution deployment journey through this series of articles:
+
+1. [Deployment overview](deployment-overview.md)
+
+1. [Deployment prerequisites](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+
+1. [Prepare SAP environment](preparing-sap.md)
+
+1. [Deploy data connector agent](deploy-data-connector-agent-container.md)
+
+1. [Deploy SAP security content](deploy-sap-security-content.md)
+
+1. **Configure Threat Monitoring for SAP solution (*You are here*)**
+
+1. Optional deployment steps
+ - [Configure auditing](configure-audit.md)
+ - [Configure SAP data connector to use SNC](configure-snc.md)
+
+## Configure watchlists
+
+You configure the Threat Monitoring for SAP solution by providing customer-specific information in the provisioned watchlists.
+
+> [!NOTE]
+>
+> After initial solution deployment, it may take some time before watchlists are populated with data.
+> If you edit a watchlist and find it is empty, please wait a few minutes and retry opening the watchlist for editing.
+
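+If you prefer to verify from the command line that the solution's watchlists have been provisioned, the following is a minimal sketch. It assumes the preview Azure CLI `sentinel` extension is available; the resource group and workspace names are placeholders to replace with your own.
+
+```bash
+# Sketch only: assumes the preview Azure CLI "sentinel" extension.
+az extension add --name sentinel
+
+# List the watchlists provisioned in the workspace (the SAP watchlists appear
+# here once the solution has been deployed). Replace both placeholder names.
+az sentinel watchlist list \
+    --resource-group MyResourceGroup \
+    --workspace-name MySentinelWorkspace \
+    --output table
+```
+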
+### SAP - Systems watchlist
+The SAP - Systems watchlist defines which SAP systems are present in the monitored environment. For every system, specify its SID, whether it's a production or a dev/test environment, and a description.
+This information is used by some analytics rules, which may react differently if relevant events appear in a development or a production system.
+
+### SAP - Networks watchlist
+The SAP - Networks watchlist outlines all networks used by the organization. It's used primarily to determine whether user logons originate from known segments of the network, and whether a user's logon origin changes unexpectedly.
+
+There are a number of approaches for documenting network topology. You could define a broad range of addresses, like 172.16.0.0/16, and name it "Corporate Network", which is good enough for tracking logons from outside that range. A more segmented approach, however, gives you better visibility into potentially atypical activity.
+
+For example: define the following two segments and their geographical locations:
+
+| Segment | Location |
+| - | - |
+| 192.168.10.0/23 | Western Europe |
+| 10.15.0.0/16 | Australia |
+
+Now Microsoft Sentinel will be able to differentiate a logon from 192.168.10.15 (in the first segment) from a logon from 10.15.2.1 (in the second segment) and alert you if such behavior is identified as atypical.
+
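+To illustrate how these segments can be put to work, here is a minimal query sketch, wrapped in the Azure CLI so it runs from a shell. The watchlist alias `SAP - Networks` and its `Network` and `Description` column names are assumptions; confirm them against the watchlist you actually populated.
+
+```bash
+# Sketch only: the watchlist alias and column names are assumptions - verify them first.
+# $WORKSPACE_ID is the Log Analytics workspace (customer) ID.
+az monitor log-analytics query \
+    --workspace "$WORKSPACE_ID" \
+    --analytics-query "
+        let segments = _GetWatchlist('SAP - Networks') | project Network, Description;
+        datatable(ClientIp: string)['192.168.10.15', '10.15.2.1', '203.0.113.7']
+        | evaluate ipv4_lookup(segments, ClientIp, Network, return_unmatched=true)
+    "
+```
+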
+### Sensitive data watchlists
+
+- SAP - Sensitive Function Modules
+- SAP - Sensitive Tables
+- SAP - Sensitive ABAP Programs
+- SAP - Sensitive Transactions
+- SAP - Critical Authorizations
+
+All of these watchlists identify sensitive actions or data that users can carry out or access. Several well-known operations, tables, and authorizations are preconfigured in the watchlists; however, we recommend that you consult with the SAP BASIS team to identify which operations, transactions, authorizations, and tables are considered sensitive in your SAP environment.
+
+### User master data watchlists
+
+- SAP - Sensitive Profiles
+- SAP - Sensitive Roles
+- SAP - Privileged Users
+
+The Threat Monitoring for SAP solution uses user master data gathered from SAP systems to identify which users, profiles, and roles should be considered sensitive. Some sample data is included in the watchlists, but we recommend that you consult with the SAP BASIS team to identify the sensitive users, roles, and profiles in your organization and populate the watchlists accordingly.
+
+## Start enabling analytics rules
+By default, all analytics rules provided in the Threat Monitoring for SAP solution are disabled. When you install the solution, don't enable all the rules at once, or you risk generating a lot of noise. Instead, take a staged approach: enable rules over time, confirm that each one isn't producing noise or false positives, and make sure every alert is operationalized, that is, that it has a response plan. We consider the following rules the easiest to implement, so it's best to start with them (a command-line sketch for reviewing rule state follows the list):
+
+1. Deactivation of Security Audit Log
+1. Client Configuration Change
+1. Change in Sensitive Privileged User
+1. Sensitive privileged user logon
+1. Sensitive privileged user makes a change in other
+1. Sensitive privileged user password change and login
+1. System configuration change
+1. Brute force (RFC)
+1. Function module tested
+
sentinel Preparing Sap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/preparing-sap.md
Last updated 04/07/2022
This article shows you how to deploy the SAP Change Requests (CRs) necessary to prepare the environment for the installation of the SAP agent, so that it can properly connect to your SAP systems. > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Deployment milestones
Track your SAP solution deployment journey through this series of articles:
1. [Deploy SAP security content](deploy-sap-security-content.md)
+1. [Configure Threat Monitoring for SAP solution](deployment-solution-configuration.md)
+ 1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure SAP data connector to use SNC](configure-snc.md)
Track your SAP solution deployment journey through this series of articles:
> - **IP address:** `192.168.136.4` > - **Administrator user:** `a4hadm`, however, the SSH connection to the SAP system is established with `root` user credentials.
-The deployment of Microsoft Sentinel's Continuous Threat Monitoring for SAP solution requires the installation of several CRs. More details about the required CRs can be found in the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of this guide.
+The deployment of Microsoft Sentinel's Threat Monitoring for SAP solution requires the installation of several CRs. More details about the required CRs can be found in the [SAP environment validation steps](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#sap-environment-validation-steps) section of this guide.
To deploy the CRs, follow the steps outlined below:
The next step is to generate an active role profile for Microsoft Sentinel to us
### Create a user
-Microsoft Sentinel's Continuous Threat Monitoring solution for SAP requires a user account to connect to your SAP system. Use the following instructions to create a user account and assign it to the role that you created in the previous step.
+Microsoft Sentinel's Threat Monitoring solution for SAP requires a user account to connect to your SAP system. Use the following instructions to create a user account and assign it to the role that you created in the previous step.
In the examples shown here, we will use the role name **/MSFTSEN/SENTINEL_CONNECTOR**.
sentinel Prerequisites For Deploying Sap Continuous Threat Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/prerequisites-for-deploying-sap-continuous-threat-monitoring.md
Title: Prerequisites for deploying SAP continuous threat monitoring in Microsoft Sentinel | Microsoft Docs
-description: This article lists the prerequisites required for deployment of the Microsoft Sentinel Continuous Threat Monitoring solution for SAP.
+ Title: Prerequisites for deploying Threat Monitoring for SAP in Microsoft Sentinel | Microsoft Docs
+description: This article lists the prerequisites required for deployment of the Microsoft Sentinel Threat Monitoring solution for SAP.
Last updated 04/07/2022
-# Prerequisites for deploying SAP continuous threat monitoring in Microsoft Sentinel
+# Prerequisites for deploying Threat Monitoring for SAP in Microsoft Sentinel
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article lists the prerequisites required for deployment of the Microsoft Sentinel Continuous Threat Monitoring solution for SAP.
+This article lists the prerequisites required for deployment of the Microsoft Sentinel Threat Monitoring solution for SAP.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Deployment milestones
Track your SAP solution deployment journey through this series of articles:
1. [Deploy SAP security content](deploy-sap-security-content.md)
+1. [Configure Threat Monitoring for SAP solution](deployment-solution-configuration.md)
+ 1. Optional deployment steps - [Configure auditing](configure-audit.md) - [Configure SAP data connector to use SNC](configure-snc.md) ## Table of prerequisites
-To successfully deploy the SAP Continuous Threat Monitoring solution, you must meet the following prerequisites:
+To successfully deploy the Threat Monitoring for SAP solution, you must meet the following prerequisites:
### Azure prerequisites
To successfully deploy the SAP Continuous Threat Monitoring solution, you must m
| Prerequisite | Description | | - | -- | | **Supported SAP versions** | The SAP data connector agent works best with [SAP_BASIS versions 750 SP13](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows) or later. <br><br>Certain steps in this tutorial provide alternative instructions if you're working on the older [SAP_BASIS version 740](https://support.sap.com/en/my-support/software-downloads/support-package-stacks/product-versions.html#:~:text=SAP%20NetWeaver%20%20%20%20SAP%20Product%20Version,%20%20SAPKB710%3Cxx%3E%20%207%20more%20rows). |
-| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sap-sdk-download)).<br>At the link, select **SAP NW RFC SDK 7.50** -> **Linux on X86_64 64BIT** -> **Download the latest version**.<br><br>Make sure that you also have an SAP user account in order to access the SAP software download page. |
+| **Required software** | SAP NetWeaver RFC SDK 7.50 ([Download here](https://aka.ms/sentinel4sapsdk))<br>Make sure that you also have an SAP user account in order to access the SAP software download page. |
| **SAP system details** | Make a note of the following SAP system details for use in this tutorial:<br>- SAP system IP address and FQDN hostname<br>- SAP system number, such as `00`<br>- SAP System ID, from the SAP NetWeaver system (for example, `NPL`) <br>- SAP client ID, such as `001` | | **SAP NetWeaver instance access** | The SAP data connector agent uses one of the following mechanisms to authenticate to the SAP system: <br>- SAP ABAP user/password<br>- A user with an X.509 certificate (This option requires additional configuration steps) |
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
Title: Microsoft Sentinel Continuous Threat Monitoring for SAP container kickstart deployment script reference | Microsoft Docs
+ Title: Microsoft Sentinel Threat Monitoring for SAP container kickstart deployment script reference | Microsoft Docs
description: Description of command line options available with kickstart deployment script
Last updated 03/02/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)] > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Script overview
-Simplify the [deployment of the SAP data connector agent container](deploy-data-connector-agent-container.md) by using the provided **Kickstart script** (available at [Microsoft Azure Sentinel SAP Continuous Threat Monitoring GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)), which can also enable different modes of secrets storage, configure SNC, and more.
+Simplify the [deployment of the SAP data connector agent container](deploy-data-connector-agent-container.md) by using the provided **Kickstart script** (available at [Microsoft Azure Sentinel Threat Monitoring for SAP GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)), which can also enable different modes of secrets storage, configure SNC, and more.
## Parameter reference
If set to `cfgf`, configuration file stored locally will be used to store secret
**Required:** No. The script attempts to locate an nwrfc*.zip file in the current folder; if one isn't found, you're prompted to supply a valid NetWeaver SDK archive file.
-**Explanation:** NetWeaver SDK file path. A valid SDK is required for the data collector to operate. For more information see [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#table-of-prerequisites).
+**Explanation:** NetWeaver SDK file path. A valid SDK is required for the data collector to operate. For more information see [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md#table-of-prerequisites).
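+
+For illustration, invoking the script with an explicit SDK path might look like the following sketch. The script filename and the `--sdk` flag name are assumptions based on the kickstart script in the GitHub repository linked above; confirm them with the script's built-in help before running.
+
+```bash
+# Sketch only: the script name and --sdk flag are assumptions - verify with --help.
+chmod +x sapcon-sentinel-kickstart.sh
+
+# Supply an explicit path to the NetWeaver RFC SDK archive instead of relying on
+# auto-detection of an nwrfc*.zip file in the current folder.
+./sapcon-sentinel-kickstart.sh --sdk /path/to/nwrfc750P_8-70002752.zip
+```
+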
#### Enterprise Application ID
If set to `cfgf`, configuration file stored locally will be used to store secret
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
Title: Microsoft Sentinel Continuous Threat Monitoring for SAP container configuration file reference | Microsoft Docs
+ Title: Microsoft Sentinel Threat Monitoring for SAP container configuration file reference | Microsoft Docs
description: Description of settings available in systemconfig.ini file
Last updated 03/03/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)] > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
The *systemconfig.ini* file is used to configure behavior of the data collector. Configuration options are grouped into several sections. This article lists options available and provides an explanation to the options.
The *systemconfig.ini* file is used to configure behavior of the data collector.
## Secrets Source section ```systemconfig.ini
-secrets=AZURE_KEY_VAULT|DOCKER_SECRETS|DOCKER_FIXED
+secrets=AZURE_KEY_VAULT|DOCKER_FIXED
+# Storage location of SAP credentials and Log Analytics workspace ID and key
+# AZURE_KEY_VAULT - store in an Azure Key Vault. Requires keyvault option and intprefix option
+# DOCKER_FIXED - store in systemconfig.ini file. Requires user, passwd, loganalyticswsid and publickey options
keyvault=<vaultname> # Azure Keyvault name, in case secrets = AZURE_KEY_VAULT
USRSTAMP_INCREMENTAL = <True/False>
``` ## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md)
sentinel Reference Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-update.md
Title: Microsoft Sentinel Continuous Threat Monitoring for SAP container update script reference | Microsoft Docs
+ Title: Microsoft Sentinel Threat Monitoring for SAP container update script reference | Microsoft Docs
description: Description of command line options available with update deployment script
Last updated 03/02/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)] > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-The SAP data collector agent container uses an update script (available at [Microsoft Azure Sentinel SAP Continuous Threat Monitoring GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)) to simplify the update process.
+The SAP data collector agent container uses an update script (available at [Microsoft Azure Sentinel Threat Monitoring for SAP GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP)) to simplify the update process.
This article shows how the script's behavior can be customized by configuring its parameters.
During the update process, the script identifies any containers running the SAP
**Required:** No
-**Explanation:** By default, the update script updates all containers running Continuous Threat Monitoring for SAP. To update a single, or multiple containers, specify `--containername <containername>` switch. Switch can be specified multiple times, e.e. `--containername sapcon-A4H --containername sapcon-QQ1 --containername sapcon-QAT`. In such case, only specified containers will be updated. If container name specified does not exist, it will be skipped by the script.
+**Explanation:** By default, the update script updates all containers running Threat Monitoring for SAP. To update one or more specific containers, specify the `--containername <containername>` switch. The switch can be specified multiple times, for example `--containername sapcon-A4H --containername sapcon-QQ1 --containername sapcon-QAT`. In that case, only the specified containers are updated; if a specified container name doesn't exist, the script skips it.
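+
+For example, updating only two specific containers might look like the following sketch. The update script's filename is an assumption based on the GitHub repository linked above; confirm it before running.
+
+```bash
+# Sketch only: the script filename is an assumption - verify against the repository.
+# The --containername switch is repeated once per container to update.
+./sapcon-instance-update.sh \
+    --containername sapcon-A4H \
+    --containername sapcon-QQ1
+```
+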
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-deploy-troubleshoot.md
Title: Microsoft Sentinel SAP solution deployment troubleshooting | Microsoft Docs
-description: Learn how to troubleshoot specific issues that may occur in your Microsoft Sentinel SAP solution deployment.
+ Title: Microsoft Sentinel Threat Monitoring for SAP solution deployment troubleshooting | Microsoft Docs
+description: Learn how to troubleshoot specific issues that may occur in your Microsoft Sentinel Threat Monitoring for SAP solution deployment.
Last updated 11/09/2021
-# Troubleshooting your Microsoft Sentinel SAP solution deployment
+# Troubleshooting your Microsoft Sentinel Threat Monitoring for SAP solution deployment
[!INCLUDE [Banner for top of topics](../includes/banner.md)] > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
## Useful Docker commands
docker restart sapcon-[SID]
## View all Docker execution logs
-To view all Docker execution logs for your Microsoft Sentinel SAP data connector deployment, run one of the following commands:
+To view all Docker execution logs for your Microsoft Sentinel Threat Monitoring for SAP data connector deployment, run one of the following commands:
```bash docker exec -it sapcon-[SID] bash && cd /sapcon-app/sapcon/logs
To check for misconfigurations, run the **RSDBTIME** report in transaction **SE3
## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
Troubleshooting:
Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md
Title: Microsoft Sentinel SAP data connector expert configuration options, on-premises deployment, and SAPControl log sources | Microsoft Docs
+ Title: Microsoft Sentinel Threat Monitoring for SAP data connector expert configuration options, on-premises deployment, and SAPControl log sources | Microsoft Docs
description: Learn how to deploy the Microsoft Sentinel data connector for SAP environments using expert configuration options and an on-premises machine. Also learn more about SAPControl log sources.
Last updated 02/22/2022
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article describes how to deploy the Microsoft Sentinel SAP data connector in an expert or custom process, such as using an on-premises machine and an Azure Key Vault to store your credentials.
+This article describes how to deploy the Microsoft Sentinel Threat Monitoring for SAP data connector in an expert or custom process, such as using an on-premises machine and an Azure Key Vault to store your credentials.
> [!NOTE]
-> The default, and most recommended process for deploying the Microsoft Sentinel SAP data connector is by [using an Azure VM](deploy-data-connector-agent-container.md). This article is intended for advanced users.
+> The default, and most recommended process for deploying the Microsoft Sentinel Threat Monitoring for SAP data connector is by [using an Azure VM](deploy-data-connector-agent-container.md). This article is intended for advanced users.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> ## Prerequisites
-The basic prerequisites for deploying your Microsoft Sentinel SAP data connector are the same regardless of your deployment method.
+The basic prerequisites for deploying your Microsoft Sentinel Threat Monitoring for SAP data connector are the same regardless of your deployment method.
Make sure that your system complies with the prerequisites documented in the main [SAP data connector prerequisites document](prerequisites-for-deploying-sap-continuous-threat-monitoring.md) before you start. ## Create your Azure key vault
-Create an Azure key vault that you can dedicate to your Microsoft Sentinel SAP data connector.
+Create an Azure key vault that you can dedicate to your Microsoft Sentinel Threat Monitoring for SAP data connector.
Run the following command to create your Azure key vault and grant access to an Azure service principal:
We recommend that you perform this procedure after you have a key vault ready wi
1. On your on-premises machine, create a new folder with a meaningful name, and copy the SDK zip file into your new folder.
-1. Clone the Microsoft Sentinel solution GitHub repository onto your on-premises machine, and copy Microsoft Sentinel SAP solution **systemconfig.ini** file into your new folder.
+1. Clone the Microsoft Sentinel solution GitHub repository onto your on-premises machine, and copy Microsoft Sentinel Threat Monitoring for SAP solution **systemconfig.ini** file into your new folder.
For example:
We recommend that you perform this procedure after you have a key vault ready wi
docker logs -f sapcon-[SID] ```
-1. Continue with deploying the **Microsoft Sentinel - Continuous Threat Monitoring for SAP** solution.
+1. Continue with deploying the **Microsoft Sentinel - Threat Monitoring for SAP** solution.
Deploying the solution enables the SAP data connector to display in Microsoft Sentinel and deploys the SAP workbook and analytics rules. When you're done, manually add and customize your SAP watchlists.
We recommend that you perform this procedure after you have a key vault ready wi
## Manually configure the SAP data connector
-The Microsoft Sentinel SAP solution data connector is configured in the **systemconfig.ini** file, which you cloned to your SAP data connector machine as part of the [deployment procedure](#perform-an-expert--custom-installation).
+The Microsoft Sentinel Threat Monitoring for SAP solution data connector is configured in the **systemconfig.ini** file, which you cloned to your SAP data connector machine as part of the [deployment procedure](#perform-an-expert--custom-installation).
The following code shows a sample **systemconfig.ini** file:
javatz = <SET_JAVA_TZ --Use ONLY GMT FORMAT-- example - For OS Timezone = NZST u
### Define the SAP logs that are sent to Microsoft Sentinel
-Add the following code to the Microsoft Sentinel SAP solution **systemconfig.ini** file to define the logs that are sent to Microsoft Sentinel.
+Add the following code to the Microsoft Sentinel Threat Monitoring for SAP solution **systemconfig.ini** file to define the logs that are sent to Microsoft Sentinel.
-For more information, see [Microsoft Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md).
+For more information, see [Microsoft Sentinel Threat Monitoring for SAP solution logs reference (public preview)](sap-solution-log-reference.md).
```python ##############################################################
JAVAFilesLogs = False
### SAL logs connector settings
-Add the following code to the Microsoft Sentinel SAP data connector **systemconfig.ini** file to define other settings for SAP logs ingested into Microsoft Sentinel.
+Add the following code to the Microsoft Sentinel Threat Monitoring for SAP data connector **systemconfig.ini** file to define other settings for SAP logs ingested into Microsoft Sentinel.
For more information, see [Perform an expert / custom SAP data connector installation](#perform-an-expert--custom-installation).
For more information, see [Deploy the SAP solution](deploy-sap-security-content.
For more information, see: -- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)-- [Microsoft Sentinel SAP solution detailed SAP requirements](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)-- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution detailed SAP requirements](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution logs reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
+- [Troubleshooting your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Title: Microsoft Sentinel SAP solution - data reference | Microsoft Docs
-description: Learn about the SAP logs, tables, and functions available from the Microsoft Sentinel SAP solution.
+ Title: Microsoft Sentinel Threat Monitoring for SAP solution - data reference | Microsoft Docs
+description: Learn about the SAP logs, tables, and functions available from the Microsoft Sentinel Threat Monitoring for SAP solution.
Last updated 02/22/2022
-# Microsoft Sentinel SAP solution data reference (public preview)
+# Microsoft Sentinel Threat Monitoring for SAP solution data reference (public preview)
[!INCLUDE [Banner for top of topics](../includes/banner.md)] > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> > Some logs, noted below, are not sent to Microsoft Sentinel by default, but you can manually add them as needed. For more information, see [Define the SAP logs that are sent to Microsoft Sentinel](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-microsoft-sentinel). >
-This article describes the functions, logs, and tables available as part of the Microsoft Sentinel SAP solution and its data connector. It is intended for advanced SAP users.
+This article describes the functions, logs, and tables available as part of the Microsoft Sentinel Threat Monitoring for SAP solution and its data connector. It is intended for advanced SAP users.
## Functions available from the SAP solution
SAPConnectorOverview(7d)
## Logs produced by the data connector agent
-This section describes the SAP logs available from the Microsoft Sentinel SAP data connector, including the table names in Microsoft Sentinel, the log purposes, and detailed log schemas. Schema field descriptions are based on the field descriptions in the relevant [SAP documentation](https://help.sap.com/).
+This section describes the SAP logs available from the Microsoft Sentinel Threat Monitoring for SAP data connector, including the table names in Microsoft Sentinel, the log purposes, and detailed log schemas. Schema field descriptions are based on the field descriptions in the relevant [SAP documentation](https://help.sap.com/).
For best results, use the Microsoft Sentinel functions listed below to visualize, access, and query the data.
For best results, refer to these tables using the name in the **Sentinel functio
For more information, see: - [Deploy the Microsoft Sentinel solution for SAP](deployment-overview.md)-- [Microsoft Sentinel SAP solution detailed SAP requirements](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution detailed SAP requirements](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)-- [Microsoft Sentinel SAP solution: built-in security content](sap-solution-security-content.md)-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: built-in security content](sap-solution-security-content.md)
+- [Troubleshooting your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
Title: Microsoft Sentinel SAP solution - security content reference | Microsoft Docs
-description: Learn about the built-in security content provided by the Microsoft Sentinel SAP solution.
+ Title: Microsoft Sentinel Threat Monitoring for SAP solution - security content reference | Microsoft Docs
+description: Learn about the built-in security content provided by the Microsoft Sentinel Threat Monitoring for SAP solution.
Last updated 04/27/2022
-# Microsoft Sentinel SAP solution: security content reference
+# Microsoft Sentinel Threat Monitoring for SAP solution: security content reference
[!INCLUDE [Banner for top of topics](../includes/banner.md)]
-This article details the security content available for the SAP continuous threat monitoring.
+This article details the security content available for the Microsoft Sentinel Threat Monitoring for SAP solution.
> [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
Available security content includes a built-in workbook and built-in analytics rules. You can also add SAP-related [watchlists](../watchlists.md) to use in your search, detection rules, threat hunting, and response playbooks.
Use the following built-in workbooks to visualize and monitor data ingested via
| **SAP - Persistency & Data Exfiltration** | Displays data such as: <br><br>Internet Communication Framework (ICF) services, including activations and deactivations and data about new services and service handlers <br><br> Insecure operations, including both function modules and programs <br><br>Direct access to sensitive tables | Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[ABAPSpoolLog_CL](sap-solution-log-reference.md#abap-spool-log)<br><br>[ABAPSpoolOutputLog_CL](sap-solution-log-reference.md#apab-spool-output-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
-For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy SAP continuous threat monitoring](deployment-overview.md).
+For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy Threat Monitoring for SAP](deployment-overview.md).
## Built-in analytics rules
-The following tables list the built-in [analytics rules](deploy-sap-security-content.md) that are included in the Microsoft Sentinel SAP solution, deployed from the Microsoft Sentinel Solutions marketplace.
+The following tables list the built-in [analytics rules](deploy-sap-security-content.md) that are included in the Microsoft Sentinel Threat Monitoring for SAP solution, deployed from the Microsoft Sentinel Solutions marketplace.
### Built-in SAP analytics rules for initial access
The following tables list the built-in [analytics rules](deploy-sap-security-con
## Available watchlists
-The following table lists the [watchlists](deploy-sap-security-content.md) available for the Microsoft Sentinel SAP solution, and the fields in each watchlist.
+The following table lists the [watchlists](deploy-sap-security-content.md) available for the Microsoft Sentinel Threat Monitoring for SAP solution, and the fields in each watchlist.
-These watchlists provide the configuration for the Microsoft Sentinel SAP Continuous Threat Monitoring solution. The [SAP watchlists](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists) are available in the Microsoft Sentinel GitHub repository.
+These watchlists provide the configuration for the Microsoft Sentinel Threat Monitoring for SAP solution. The [SAP watchlists](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists) are available in the Microsoft Sentinel GitHub repository.
| Watchlist name | Description and fields | | | |
These watchlists provide the configuration for the Microsoft Sentinel SAP Contin
For more information, see: -- [Deploying SAP continuous threat monitoring](deployment-overview.md)-- [Microsoft Sentinel SAP solution logs reference](sap-solution-log-reference.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploying Threat Monitoring for SAP](deployment-overview.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution logs reference](sap-solution-log-reference.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Configuration file reference](configuration-file-reference.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)-- [Troubleshooting your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Troubleshooting your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Solution Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/solution-overview.md
+
+ Title: Microsoft Sentinel Threat Monitoring for SAP solution overview | Microsoft Docs
+description: This article introduces the Microsoft Sentinel Threat Monitoring solution for SAP
+++ Last updated : 06/21/2022++
+# Microsoft Sentinel Threat Monitoring for SAP solution overview
+
+SAP systems pose a unique security challenge: they handle extremely sensitive information and are prime targets for attackers.
+
+Security operations teams have traditionally had very little visibility into SAP systems. A breach of an SAP system could result in stolen files, exposed data, or a disrupted supply chain. Once an attacker is in the system, there are few controls to detect exfiltration or other malicious activity. SAP activity needs to be correlated with other data across the organization for effective threat detection.
+
+To help close this gap, Microsoft Sentinel offers the Threat Monitoring for SAP solution. This comprehensive solution uses components at every level of Microsoft Sentinel to offer end-to-end detection, analysis, investigation, and response to threats in your SAP environment.
+
+## What Threat Monitoring for SAP does
+
+Microsoft Sentinel's Threat Monitoring for SAP solution continuously monitors SAP systems for threats at all layers - business logic, application, database, and OS.
+
+The solution allows you to:
+
+- Detect threats in SAP system data, such as:
+  - Privilege escalation
+  - Unapproved changes
+  - Unauthorized access
+- Correlate SAP monitoring with other signals across your organization
+- Build your own detections to monitor sensitive transactions and other business risks
+
+## Solution details
+
+### Log sources
+
+The solution's data connector retrieves a wide variety of SAP Log Sources:
+- ABAP Security Audit Log
+- ABAP Change Documents Log
+- ABAP Spool Log
+- ABAP Spool Output Log
+- ABAP Job Log
+- ABAP Workflow Log
+- ABAP DB Table Data
+- SAP User Master Data
+- ABAP CR Log
+- ICM Logs
+- JAVA Webdispatcher Logs
+- Syslog
+
+### Threat detection coverage
+
+- Suspicious privilege operations
+  - Privileged user creation
+  - Usage of break-glass users
+  - Unlocking a user and logging into it from the same IP
+  - Assignment of sensitive roles and admin privileges
+  - A user unlocking and then using other users
+  - Critical authorization assignment
+
+- Attempts to bypass SAP security mechanisms
+  - Disabling audit logging (HANA and SAP)
+  - Execution of sensitive function modules
+  - Unlocking blocked transactions
+  - Debugging production systems
+  - Direct access to sensitive tables by RFC
+  - RFC execution of sensitive functions
+  - System configuration changes and dynamic ABAP programs
+
+- Backdoor creation (persistency)
+  - Creation of new internet-facing interfaces (ICF)
+  - Directly accessing sensitive tables by remote function call
+  - Assigning new service handlers to ICF
+  - Execution of obsolete programs
+  - A user unlocking and then using other users
+
+- Data exfiltration
+  - Multiple file downloads
+  - Spool takeovers
+  - Allowing access to insecure FTP servers and connections from unauthorized hosts
+  - Dynamic RFC destinations
+  - HANA DB user admin actions performed at the database level
+
+- Initial access
+  - Brute force
+  - Multiple logons from the same IP
+  - Privileged user logons from unexpected networks
+  - SPNego replay attack
+
+## Next steps
+
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
+
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md)
+- [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md)
+- [Deploy SAP security content](deploy-sap-security-content.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
+- [Enable and configure SAP auditing](configure-audit.md)
+- [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md)
+
+Troubleshooting:
+
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Configure SAP Transport Management System](configure-transport.md)
+
+Reference files:
+
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
+- [Kickstart script reference](reference-kickstart.md)
+- [Update script reference](reference-update.md)
+- [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Update Sap Data Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/update-sap-data-connector.md
Last updated 03/02/2022
This article shows you how to update an already existing SAP data connector to its latest version. > [!IMPORTANT]
-> The Microsoft Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The Microsoft Sentinel Threat Monitoring for SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
If you have a Docker container already running with an earlier version of the SAP data connector, run the SAP data connector update script to get the latest features available.
The SAP data connector Docker container on your machine is updated.
Be sure to check for any other available updates, such as: - Relevant SAP change requests, in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR).-- Microsoft Sentinel SAP security content, in the **Microsoft Sentinel Continuous Threat Monitoring for SAP** solution.
+- Microsoft Sentinel Threat Monitoring for SAP security content, in the **Microsoft Sentinel Threat Monitoring for SAP** solution.
- Relevant watchlists, in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/Analytics/Watchlists). ## Next steps
-Learn more about the Microsoft Sentinel SAP solutions:
+Learn more about the Microsoft Sentinel Threat Monitoring for SAP solutions:
-- [Deploy Continuous Threat Monitoring for SAP](deployment-overview.md)-- [Prerequisites for deploying SAP continuous threat monitoring](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
+- [Deploy Threat Monitoring for SAP](deployment-overview.md)
+- [Prerequisites for deploying Threat Monitoring for SAP](prerequisites-for-deploying-sap-continuous-threat-monitoring.md)
- [Deploy SAP Change Requests (CRs) and configure authorization](preparing-sap.md) - [Deploy and configure the SAP data connector agent container](deploy-data-connector-agent-container.md) - [Deploy SAP security content](deploy-sap-security-content.md)-- [Deploy the Microsoft Sentinel SAP data connector with SNC](configure-snc.md)
+- [Deploy the Microsoft Sentinel Threat Monitoring for SAP data connector with SNC](configure-snc.md)
- [Enable and configure SAP auditing](configure-audit.md) - [Collect SAP HANA audit logs](collect-sap-hana-audit-logs.md) Troubleshooting: -- [Troubleshoot your Microsoft Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
+- [Troubleshoot your Microsoft Sentinel Threat Monitoring for SAP solution deployment](sap-deploy-troubleshoot.md)
- [Configure SAP Transport Management System](configure-transport.md) Reference files: -- [Microsoft Sentinel SAP solution data reference](sap-solution-log-reference.md)-- [Microsoft Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution data reference](sap-solution-log-reference.md)
+- [Microsoft Sentinel Threat Monitoring for SAP solution: security content reference](sap-solution-security-content.md)
- [Kickstart script reference](reference-kickstart.md) - [Update script reference](reference-update.md) - [Systemconfig.ini file reference](reference-systemconfig.md)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## June 2022
+
+- [Microsoft Purview Data Loss Prevention (DLP) integration in Microsoft Sentinel (Preview)](#microsoft-purview-data-loss-prevention-dlp-integration-in-microsoft-sentinel-preview)
+- [Incident update trigger for automation rules (Preview)](#incident-update-trigger-for-automation-rules-preview)
+
+### Microsoft Purview Data Loss Prevention (DLP) integration in Microsoft Sentinel (Preview)
+
+[Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md) now includes the integration of Microsoft Purview DLP alerts and incidents in Microsoft Sentinel's incidents queue.
+
+With this feature, you will be able to do the following:
+
+- View all DLP alerts grouped under incidents in the Microsoft 365 Defender incident queue.
+
+- View intelligent inter-solution (DLP-MDE, DLP-MDO) and intra-solution (DLP-DLP) alerts correlated under a single incident.
+
+- Retain DLP alerts and incidents for **180 days**.
+
+- Hunt for compliance logs along with security logs under Advanced Hunting.
+
+- Take in-place administrative remediation actions on users, files, and devices.
+
+- Associate custom tags to DLP incidents and filter by them.
+
+- Filter the unified incident queue by DLP policy name, tag, date, service source, incident status, and user.
+
+In addition to the native experience in the Microsoft 365 Defender Portal, customers will also be able to use the one-click Microsoft 365 Defender connector to [ingest and investigate DLP incidents in Microsoft Sentinel](/microsoft-365/security/defender/investigate-dlp).
++
+### Incident update trigger for automation rules (Preview)
+
+Automation rules are an essential tool for triaging your incident queue, reducing the noise in it, and coping with the high volume of incidents in your SOC seamlessly and transparently. Previously, you could create and run automation rules and playbooks that would run upon the creation of an incident, but your automation options were more limited past that point in the incident lifecycle.
+
+You can now create automation rules and playbooks that run when incident fields are modified: for example, when an owner is assigned, when the status or severity is changed, or when alerts and comments are added.
+
+Learn more about the [update trigger in automation rules](automate-incident-handling-with-automation-rules.md).
+ ## May 2022 - [Relate alerts to incidents](#relate-alerts-to-incidents-preview)
service-bus-messaging Service Bus Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-managed-service-identity.md
Title: Managed identities for Azure resources with Service Bus description: This article describes how to use managed identities to access with Azure Service Bus entities (queues, topics, and subscriptions). Previously updated : 01/06/2022 Last updated : 06/23/2022
When an Azure role is assigned to an Azure AD security principal, Azure grants a
## Azure built-in roles for Azure Service Bus For Azure Service Bus, the management of namespaces and all related resources through the Azure portal and the Azure resource management API is already protected using the Azure RBAC model. Azure provides the below Azure built-in roles for authorizing access to a Service Bus namespace: -- [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner): Enables data access to Service Bus namespace and its entities (queues, topics, subscriptions, and filters)-- [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender): Use this role to give send access to Service Bus namespace and its entities.-- [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver): Use this role to give receiving access to Service Bus namespace and its entities.
+- [Azure Service Bus Data Owner](../role-based-access-control/built-in-roles.md#azure-service-bus-data-owner): Use this role to allow full access to a Service Bus namespace and its entities (queues, topics, subscriptions, and filters).
+- [Azure Service Bus Data Sender](../role-based-access-control/built-in-roles.md#azure-service-bus-data-sender): Use this role to allow sending messages to Service Bus queues and topics.
+- [Azure Service Bus Data Receiver](../role-based-access-control/built-in-roles.md#azure-service-bus-data-receiver): Use this role to allow receiving messages from Service Bus queues and subscriptions.
## Resource scope Before you assign an Azure role to a security principal, determine the scope of access that the security principal should have. Best practices dictate that it's always best to grant only the narrowest possible scope.
Before you can use managed identities for Azure Resources to authorize Service B
- [Azure Resource Manager client libraries](../active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md) ## Grant permissions to a managed identity in Azure AD
-To authorize a request to the Service Bus service from a managed identity in your application, first configure Azure role-based access control (Azure RBAC) settings for that managed identity. Azure Service Bus defines Azure roles that encompass permissions for sending and reading from Service Bus. When the Azure role is assigned to a managed identity, the managed identity is granted access to Service Bus entities at the appropriate scope.
-
-For more information about assigning Azure roles, see [Authenticate and authorize with Azure Active Directory for access to Service Bus resources](authenticate-application.md#azure-built-in-roles-for-azure-service-bus).
+To authorize a request to the Service Bus service from a managed identity in your application, the managed identity needs to be added to a Service Bus RBAC role (Azure Service Bus Data Owner, Azure Service Bus Data Sender, Azure Service Bus Data Receiver) at the appropriate scope (subscription, resource group, or namespace). When the Azure role is assigned to a managed identity, the managed identity is granted access to Service Bus entities at the specified scope. For descriptions of Service Bus roles, see the [Azure built-in roles for Azure Service Bus](#azure-built-in-roles-for-azure-service-bus) section. For more information about assigning Azure roles, see [Authenticate and authorize with Azure Active Directory for access to Service Bus resources](authenticate-application.md#azure-built-in-roles-for-azure-service-bus).
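+As an illustration only (not part of this article's procedure), the following minimal Java sketch sends a message by using the assigned identity. The namespace and queue name are placeholders, and the `azure-identity` and `azure-messaging-servicebus` client libraries are assumed:
+
+```java
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import com.azure.messaging.servicebus.ServiceBusClientBuilder;
+import com.azure.messaging.servicebus.ServiceBusMessage;
+import com.azure.messaging.servicebus.ServiceBusSenderClient;
+
+public class SendWithManagedIdentity {
+    public static void main(String[] args) {
+        // DefaultAzureCredential resolves to the managed identity when this code
+        // runs on an Azure resource that has one assigned, and the identity holds
+        // the Azure Service Bus Data Sender role at the chosen scope.
+        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
+            .credential("<namespace>.servicebus.windows.net",
+                        new DefaultAzureCredentialBuilder().build())
+            .sender()
+            .queueName("<queue-name>")
+            .buildClient();
+
+        sender.sendMessage(new ServiceBusMessage("Hello from a managed identity"));
+        sender.close();
+    }
+}
+```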
## Use Service Bus with managed identities for Azure resources To use Service Bus with managed identities, you need to assign the identity the role and the appropriate scope. The procedure in this section uses a simple application that runs under a managed identity and accesses Service Bus resources.
service-bus-messaging Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-configure-client-version.md
Previously updated : 04/22/2022 Last updated : 06/23/2022
The following sample shows how to enable TLS 1.2 in a .NET client using the Azur
await sender.SendMessagesAsync(new ServiceBusMessage($"Message for TLS check"))); } ```
+# [Java](#tab/java)
+The minimum Java version for messaging SDKs is Java 8. For Java 8 installations, the default TLS version is 1.2. For Java 11 and later, the default is TLS 1.3.
+
+Java messaging SDKs use the default `SSLContext` from the JDK. That is, if you configure JDK TLS by using the system properties documented for your JVM, the Java messaging libraries implicitly use that configuration. For example, for OpenJDK-based JVMs, you can use the system property `jdk.tls.client.protocols`. Example: `-Djdk.tls.client.protocols=TLSv1.2`.
+
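+If you can't pass JVM arguments, you can set the same property programmatically before the process opens its first TLS connection. The following is a minimal illustrative sketch (not specific to the Service Bus SDK):
+
+```java
+// Must run before any TLS handshake happens in the JVM,
+// for example at the very start of main().
+System.setProperty("jdk.tls.client.protocols", "TLSv1.2");
+```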
+There are a few other ways to enable TLS 1.2, including the following one:
+
+```java
+// Restrict an individual socket to TLS 1.2 only.
+sslSocket.setEnabledProtocols(new String[] {"TLSv1.2"});
+```
service-fabric Service Fabric Cluster Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-capacity.md
Title: Service Fabric cluster capacity planning considerations
description: Node types, durability, reliability, and other things to consider when planning your Service Fabric cluster. Previously updated : 05/21/2020 Last updated : 06/23/2022 # Service Fabric cluster capacity planning considerations
The capacity needs of your cluster will be determined by your specific workload
#### Virtual machine sizing
-**For production workloads, the recommended VM size (SKU) is [Standard D2_V2](../virtual-machines/dv2-dsv2-series.md) (or equivalent) with a minimum of 50 GB of local SSD, 2 cores, and 4 GiB of memory.** A minimum of 50 GB local SSD is recommended, however some workloads (such as those running Windows containers) will require larger disks. When choosing other [VM sizes](../virtual-machines/sizes-general.md) for production workloads, keep in mind the following constraints:
+**For production workloads, the recommended VM size (SKU) is [Standard D2_V2](../virtual-machines/dv2-dsv2-series.md) (or equivalent) with a minimum of 50 GB of local SSD, 2 cores, and 4 GiB of memory.** A minimum of 50 GB local SSD is recommended, however some workloads (such as those running Windows containers) will require larger disks.
+
+By default, local SSD is configured to 64 GB. You can change this value by using the **MaxDiskQuotaInMB** setting in the **Diagnostics** section of the cluster settings.
+
+For instructions on how to adjust the cluster settings of a cluster hosted in Azure, see [Upgrade the configuration of a cluster in Azure](/azure/service-fabric/service-fabric-cluster-config-upgrade-azure#customize-cluster-settings-using-resource-manager-templates).
+
+For instructions on how to adjust the cluster settings of a standalone cluster hosted in Windows, see [Upgrade the configuration of a standalone cluster](/azure/service-fabric/service-fabric-cluster-config-upgrade-windows-server#customize-cluster-settings-in-the-clusterconfigjson-file).
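+As an illustration only, the **fabricSettings** fragment of a cluster Resource Manager template that raises the quota might look like the following; the 131072 value is an example, not a recommendation:
+
+```json
+"fabricSettings": [
+  {
+    "name": "Diagnostics",
+    "parameters": [
+      {
+        "name": "MaxDiskQuotaInMB",
+        "value": "131072"
+      }
+    ]
+  }
+]
+```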
+
+When choosing other [VM sizes](../virtual-machines/sizes-general.md) for production workloads, keep in mind the following constraints:
- Partial / single core VM sizes like Standard A0 are not supported. - *A-series* VM sizes are not supported for performance reasons.
service-fabric Service Fabric Sfctl Chaos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-chaos.md
To get the next segment of the Chaos events, you can specify the ContinuationTok
|Argument|Description| | | | | --continuation-token | The continuation token parameter is used to obtain next set of results. A continuation token with a non-empty value is included in the response of the API when the results from the system do not fit in a single response. When this value is passed to the next API call, the API returns next set of results. If there are no further results, then the continuation token does not contain a value. The value of this parameter should not be URL encoded. |
-| --end-time-utc | The Windows file time representing the end time of the time range for which a Chaos report is to be generated. Consult [DateTime.ToFileTimeUtc Method](https\://msdn.microsoft.com/library/system.datetime.tofiletimeutc(v=vs.110).aspx) for details. |
+| --end-time-utc | The Windows file time representing the end time of the time range for which a Chaos report is to be generated. Consult [DateTime.ToFileTimeUtc Method](/dotnet/api/system.datetime.tofiletimeutc) for details. |
| --max-results | The maximum number of results to be returned as part of the paged queries. This parameter defines the upper bound on the number of results returned. The results returned can be less than the specified maximum results if they do not fit in the message as per the max message size restrictions defined in the configuration. If this parameter is zero or not specified, the paged query includes as many results as possible that fit in the return message. |
-| --start-time-utc | The Windows file time representing the start time of the time range for which a Chaos report is to be generated. Consult [DateTime.ToFileTimeUtc Method](https\://msdn.microsoft.com/library/system.datetime.tofiletimeutc(v=vs.110).aspx) for details. |
+| --start-time-utc | The Windows file time representing the start time of the time range for which a Chaos report is to be generated. Consult [DateTime.ToFileTimeUtc Method](/dotnet/api/system.datetime.tofiletimeutc) for details. |
| --timeout -t | The server timeout for performing the operation in seconds. This timeout specifies the time duration that the client is willing to wait for the requested operation to complete. The default value for this parameter is 60 seconds. Default\: 60. | ### Global Arguments
service-health Resource Health Checks Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-checks-resource-types.md
Below is a complete list of all the checks executed through resource health by r
|| |<ul><li>Are the load balancing endpoints available?</li></ul>| +
+## Microsoft.network/natGateways
+|Executed Checks|
+||
+|<ul><li>Are the NAT gateway endpoints available?</li></ul>|
+ ## Microsoft.network/trafficmanagerprofiles |Executed Checks| ||
Below is a complete list of all the checks executed through resource health by r
## Next Steps - See [Introduction to Azure Service Health dashboard](service-health-overview.md) and [Introduction to Azure Resource Health](resource-health-overview.md) to understand more about them. - [Frequently asked questions about Azure Resource Health](resource-health-faq.yml)-- Set up alerts so you are notified of health issues. For more information, see [Configure Alerts for service health events](./alerts-activity-log-service-notifications-portal.md).
+- Set up alerts so you are notified of health issues. For more information, see [Configure Alerts for service health events](./alerts-activity-log-service-notifications-portal.md).
site-recovery Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/transport-layer-security.md
If the machine is running earlier versions of Windows, ensure to install the cor
|Operating system |KB article | |||
-|Windows Server 2008 SP2 | <https://support.microsoft.com/help/4019276> |
-|Windows Server 2008 R2, Windows 7, Windows Server 2012 | <https://support.microsoft.com/help/3140245> |
+|Windows Server 2008 SP2 | <https://support.microsoft.com/help/4019276> |
+|Windows Server 2008 R2, Windows 7, Windows Server 2012 | <https://support.microsoft.com/help/3140245> |
>[!NOTE] >The update installs the required components for the protocol. After installation, to enable the required protocols, ensure to update the registry keys as mentioned in the above KB articles.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-14.04 LTS | [9.42](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [9.43](https://support.microsoft.com/topic/update-rollup-56-for-azure-site-recovery-kb5005376-33f27950-1a07-43e5-bf40-4b380a270ef6), [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
+14.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure |
|||
-16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.4.0-21-generic to 4.4.0-206-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-140-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1111-azure |
+16.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic |
|||
-18.04 LTS |[9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1130-azure </br> 4.15.0-1131-azure </br> 4.15.0-167-generic </br> 4.15.0-169-generic </br> 5.4.0-100-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
+18.04 LTS |[9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1009-azure to 4.15.0-1138-azure </br> 4.15.0-101-generic to 4.15.0-177-generic </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.3.0-19-generic to 5.3.0-76-generic </br> 5.4.0-1020-azure to 5.4.0-1078-azure </br> 5.4.0-37-generic to 5.4.0-110-generic |
18.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.15.0-1126-azure </br> 4.15.0-1127-azure </br> 4.15.0-1129-azure </br> 4.15.0-162-generic </br> 4.15.0-163-generic </br> 4.15.0-166-generic </br> 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic | 18.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.15.0-1123-azure </br> 4.15.0-1124-azure </br> 4.15.0-1125-azure</br> 4.15.0-156-generic </br> 4.15.0-158-generic </br> 4.15.0-159-generic </br> 4.15.0-161-generic </br> 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-87-generic </br> 5.4.0-89-generic | 18.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.15.0-1123-azure </br> 5.4.0-1058-azure </br> 4.15.0-156-generic </br> 4.15.0-1125-azure </br> 4.15.0-161-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-89-generic | 18.04 LTS | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.15.0-20-generic to 4.15.0-140-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-65-generic </br> 5.3.0-19-generic to 5.3.0-72-generic </br> 5.4.0-37-generic to 5.4.0-70-generic </br> 4.15.0-1009-azure to 4.15.0-1111-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1036-azure </br> 5.3.0-1007-azure to 5.3.0-1035-azure </br> 5.4.0-1020-azure to 5.4.0-1043-azure </br> 4.15.0-1114-azure </br> 4.15.0-143-generic </br> 5.4.0-1047-azure </br> 5.4.0-73-generic </br> 4.15.0-1115-azure </br> 4.15.0-144-generic </br> 5.4.0-1048-azure </br> 5.4.0-74-generic </br> 4.15.0-1121-azure </br> 4.15.0-151-generic </br> 5.3.0-76-generic </br> 5.4.0-1055-azure </br> 5.4.0-80-generic </br> 4.15.0-147-generic </br> 4.15.0-153-generic </br> 5.4.0-1056-azure </br> 5.4.0-81-generic </br> 4.15.0-1122-azure </br> 4.15.0-154-generic | |||
-20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-100-generic </br> 5.4.0-1067-azure </br> 5.4.0-1068-azure </br> 5.4.0-1069-azure </br> 5.4.0-1070-azure </br> 5.4.0-94-generic </br> 5.4.0-96-generic </br> 5.4.0-97-generic </br> 5.4.0-99-generic |
+20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-26-generic to 5.4.0-110-generic </br> 5.4.0-1010-azure to 5.4.0-1078-azure </br> 5.8.0-1033-azure to 5.8.0-1043-azure </br> 5.8.0-23-generic to 5.8.0-63-generic </br> 5.11.0-22-generic to 5.11.0-46-generic </br> 5.11.0-1007-azure to 5.11.0-1028-azure |
20.04 LTS |[9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 5.4.0-1063-azure </br> 5.4.0-1064-azure </br> 5.4.0-1065-azure </br> 5.4.0-90-generic </br> 5.4.0-91-generic </br> 5.4.0-92-generic | 20.04 LTS |[9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 5.4.0-1058-azure </br> 5.4.0-1059-azure </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure </br> 5.4.0-84-generic </br> 5.4.0-86-generic </br> 5.4.0-88-generic </br> 5.4.0-89-generic | 20.04 LTS |[9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 5.4.0-1058-azure </br> 5.4.0-84-generic </br> 5.4.0-1061-azure </br> 5.4.0-1062-azure |
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Supported release** | **Mobility service version** | **Kernel version** | | | |
-Debian 7 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
+Debian 7 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094), [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 |
|||
-Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64 |
+Debian 8 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), [9.47](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8), [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 |
|||
+Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-17-amd64 to 4.9.0-19-amd64 </br> 4.19.0-0.bpo.19-cloud-amd64 </br>
Debian 9.1 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | 4.9.0-17-amd64 </br> Debian 9.1 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 Debian 9.1 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> 4.19.0-0.bpo.18-amd64 </br> 4.19.0-0.bpo.18-cloud-amd64 </br> Debian 9.1 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.9.0-1-amd64 to 4.9.0-15-amd64 </br> 4.19.0-0.bpo.1-amd64 to 4.19.0-0.bpo.16-amd64 </br> 4.19.0-0.bpo.1-cloud-amd64 to 4.19.0-0.bpo.16-cloud-amd64 </br> |||
-Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | Not supported.
-Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | Not supported.
+Debian 10 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-19-cloud-amd64, 4.19.0-20-cloud-amd64 </br> 4.19.0-19-amd64, 4.19.0-20-amd64
+Debian 10 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | No new kernels supported.
Debian 10 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64 Debian 10 | [9.45](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d), [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 4.9.0-1-amd64 to 4.9.0-15-amd64 <br/> 4.19.0-18-amd64 </br> 4.19.0-18-cloud-amd64 Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 4.19.0-6-amd64 to 4.19.0-16-amd64 </br> 4.19.0-6-cloud-amd64 to 4.19.0-16-cloud-amd64
Debian 10 | [9.44](https://support.microsoft.com/topic/update-rollup-57-for-azur
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.85-azure:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-122.110-default:5 |
+SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.85-azure:5 </br> 4.12.14-16.88-azure:5 </br> 4.12.14-122.106-default:5 </br> 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.12-default:5 </br> 4.12.14-122.121-default:5 |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-16.80-azure </br> 4.12.14-122.103-default </br> 4.12.14-122.98-default5 | SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.73-azure </br> 4.12.14-16.76-azure </br> 4.12.14-122.88-default </br> 4.12.14-122.91-default | SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.45](https://support.microsoft.com/en-us/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.43-azure </br> 4.12.14-16.7-azure to 4.12.14-16.65-azure </br> 4.12.14-16.68-azure </br> 4.12.14-16.76-azure |
SUSE Linux Enterprise Server 12 (SP1,SP2,SP3,SP4, SP5) | [9.44](https://support.
**Release** | **Mobility service version** | **Kernel version** | | | |
-SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-150300.59.43-default:3 </br> 5.3.18-150300.59.46-default:3 </br> 5.3.18-150300.59.49-default:3 </br> 5.3.18-59.40-default:3 |
+SUSE Linux Enterprise Server 15, SP1, SP2 | [9.48](https://support.microsoft.com/en-us/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-38.34-azure:3 to 5.3.18-59.40-default:3 </br> 5.3.18-150300.59.43-default:3 to 5.3.18-150300.59.68-default:3 |
SUSE Linux Enterprise Server 15, SP1, SP2 | [9.47](https://support.microsoft.com/topic/update-rollup-60-for-azure-site-recovery-k5011122-883a93a7-57df-4b26-a1c4-847efb34a9e8) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-18.72-azure: </br> 5.3.18-18.75-azure: </br> 5.3.18-24.93-default </br> 5.3.18-24.96-default </br> 5.3.18-36-azure </br> 5.3.18-38.11-azure </br> 5.3.18-38.14-azure </br> 5.3.18-38.17-azure </br> 5.3.18-38.22-azure </br> 5.3.18-38.25-azure </br> 5.3.18-38.28-azure </br> 5.3.18-38.3-azure </br> 5.3.18-38.31-azure </br> 5.3.18-38.8-azure </br> 5.3.18-57-default </br> 5.3.18-59.10-default </br> 5.3.18-59.13-default </br> 5.3.18-59.16-default </br> 5.3.18-59.19-default </br> 5.3.18-59.24-default </br> 5.3.18-59.27-default </br> 5.3.18-59.30-default </br> 5.3.18-59.34-default </br> 5.3.18-59.37-default </br> 5.3.18-59.5-default | SUSE Linux Enterprise Server 15, SP1, SP2 | [9.46](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.66-azure </br> 5.3.18-18.69-azure </br> 5.3.18-24.83-default </br> 5.3.18-24.86-default | SUSE Linux Enterprise Server 15, SP1, SP2 | [9.45](https://support.microsoft.com/en-us/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | By default, all [stock SUSE 15, SP1, SP2 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.12.14-5.5-azure to 4.12.14-5.47-azure </br></br> 4.12.14-8.5-azure to 4.12.14-8.55-azure </br> 5.3.18-16-azure </br> 5.3.18-18.5-azure to 5.3.18-18.58-azure </br> 5.3.18-18.69-azure
spring-cloud Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/faq.md
If you encounter any issues with Azure Spring Apps, create an [Azure Support Req
### How do I get VMware Spring Runtime support (Enterprise tier only)
-Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see <https://tanzu.vmware.com/spring-runtime>. To understand the details about how to register and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft.
+Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see the [VMware Spring Runtime](https://tanzu.vmware.com/spring-runtime) page. To understand the details about how to register and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft.
> [!IMPORTANT] > After you create an Enterprise tier instance, your entitlement will be ready within three business days. If you encounter any exceptions, raise a support ticket with Microsoft to get help with it.
spring-cloud How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-application-insights.md
Previously updated : 06/08/2022 Last updated : 06/20/2022 zone_pivot_groups: spring-cloud-tier-selection
The following sections describe how to automate your deployment using Bicep, Azu
To deploy using a Bicep file, copy the following content into a *main.bicep* file. For more information, see [Microsoft.AppPlatform Spring/monitoringSettings](/azure/templates/microsoft.appplatform/spring/monitoringsettings). ```bicep
+param springName string
param location string = resourceGroup().location
-resource customize_this 'Microsoft.AppPlatform/Spring@2020-07-01' = {
- name: 'customize this'
+resource spring 'Microsoft.AppPlatform/Spring@2020-07-01' = {
+ name: springName
location: location properties: {} }
-resource customize_this_default 'Microsoft.AppPlatform/Spring/monitoringSettings@2020-11-01-preview' = {
- parent: customize_this
+resource monitorSetting 'Microsoft.AppPlatform/Spring/monitoringSettings@2020-11-01-preview' = {
+ parent: spring
name: 'default' properties: { appInsightsInstrumentationKey: '00000000-0000-0000-0000-000000000000'
To deploy using an ARM template, copy the following content into an *azuredeploy
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": {
+ "springName": {
+ "type": "string"
+ },
"location": { "type": "string", "defaultValue": "[resourceGroup().location]"
To deploy using an ARM template, copy the following content into an *azuredeploy
{ "type": "Microsoft.AppPlatform/Spring", "apiVersion": "2020-07-01",
- "name": "customize this",
+ "name": "[parameters('springName')]",
"location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.AppPlatform/Spring/monitoringSettings", "apiVersion": "2020-11-01-preview",
- "name": "[format('{0}/{1}', 'customize this', 'default')]",
+ "name": "[format('{0}/{1}', parameters('springName'), 'default')]",
"properties": { "appInsightsInstrumentationKey": "00000000-0000-0000-0000-000000000000", "appInsightsSamplingRate": 88 }, "dependsOn": [
- "[resourceId('Microsoft.AppPlatform/Spring', 'customize this')]"
+ "[resourceId('Microsoft.AppPlatform/Spring', parameters('springName'))]"
] } ] }- ``` ### Terraform
spring-cloud Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/troubleshoot.md
You can view the billing account for your subscription if you have admin access.
### I need VMware Spring Runtime Support (Enterprise tier only)
-Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see <https://tanzu.vmware.com/spring-runtime>. To understand the details about how to register and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft.
+Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see the [VMware Spring Runtime](https://tanzu.vmware.com/spring-runtime) page. For more information on registering and using this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open a support ticket with Microsoft.
## Next steps
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 06/03/2022 Last updated : 06/23/2022
The following clients are known to be incompatible with SFTP for Azure Blob Stor
## Integrations -- Change feed and Event Grid notifications aren't supported.-
+- Change feed notifications aren't supported.
- Network File System (NFS) 3.0 and SFTP can't be enabled on the same storage account. ## Performance
storage Soft Delete Blob Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/soft-delete-blob-overview.md
For more information on how to restore soft-deleted objects, see [Manage and res
> [!IMPORTANT] > Versioning is not supported for accounts that have a hierarchical namespace.
-If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new version. The new version isn't soft-deleted and isn't removed when the soft-delete retention period expires. No soft-deleted snapshots are created. When you delete a blob, the current version of the blob becomes a previous version, and there's no longer a current version. No new version is created and no soft-deleted snapshots are created.
+If blob versioning and blob soft delete are both enabled for a storage account, then overwriting a blob automatically creates a new previous version that reflects the blob's state before the write operation. The new version isn't soft-deleted and isn't removed when the soft-delete retention period expires. No soft-deleted snapshots are created.
-Enabling soft delete and versioning together protects blob versions from deletion. When soft delete is enabled, deleting a version creates a soft-deleted version. You can use the **Undelete Blob** operation to restore soft-deleted versions during the soft delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It isn't possible to restore only a single soft-deleted version.
+If blob versioning and blob soft delete are both enabled for a storage account, then when you delete a blob, the current version of the blob becomes a previous version, and there's no longer a current version. No new version is created and no soft-deleted snapshots are created. All previous versions are retained until they are explicitly deleted, either with a direct delete operation or via a lifecycle management policy.
-After the soft-delete retention period has elapsed, any soft-deleted blob versions are permanently deleted.
+Enabling soft delete and versioning together protects previous blob versions as well as current versions from deletion. When soft delete is enabled, explicitly deleting a previous version creates a soft-deleted version that is retained until the soft-delete retention period elapses. After the soft-delete retention period has elapsed, the soft-deleted blob version is permanently deleted.
+
+You can use the **Undelete Blob** operation to restore soft-deleted versions during the soft-delete retention period. The **Undelete Blob** operation always restores all soft-deleted versions of the blob. It isn't possible to restore only a single soft-deleted version.
> [!NOTE] > Calling the **Undelete Blob** operation on a deleted blob when versioning is enabled restores any soft-deleted versions or snapshots, but does not restore the current version. To restore the current version, promote a previous version by copying it to the current version.
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
This table shows how this feature is supported in your account and the impact on
- [Enable and manage blob versioning](versioning-enable.md) - [Creating a snapshot of a blob](/rest/api/storageservices/creating-a-snapshot-of-a-blob)-- [Soft delete for Azure Storage Blobs](./soft-delete-blob-overview.md)
+- [Soft delete for blobs](./soft-delete-blob-overview.md)
+- [Soft delete for containers](soft-delete-container-overview.md)
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Previously updated : 07/06/2021 Last updated : 06/23/2022
If the deleted storage account used customer-managed keys with Azure Key Vault a
## Recover a deleted account from the Azure portal
-To recover a deleted storage account from within another storage account, follow these steps:
+To restore a deleted storage account from within another storage account, follow these steps:
-1. Navigate to the overview page for an existing storage account in the Azure portal.
-1. In the **Support + troubleshooting** section, select **Recover deleted account**.
+1. Navigate to the list of your storage accounts in the Azure portal.
+1. Select the **Restore** button to open the **Restore deleted account** pane.
+1. Select the subscription for the account that you want to recover from the **Subscription** drop-down.
1. From the dropdown, select the account to recover, as shown in the following image. If the storage account that you want to recover is not in the dropdown, then it cannot be recovered. :::image type="content" source="media/storage-account-recover/recover-account-portal.png" alt-text="Screenshot showing how to recover storage account in Azure portal":::
-1. Select the **Recover** button to restore the account. The portal displays a notification that the recovery is in progress.
+1. Select the **Restore** button to recover the account. The portal displays a notification that the recovery is in progress.
## Recover a deleted account via a support ticket
storage Storage Rest Api Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-rest-api-auth.md
private static string GetCanonicalizedHeaders(HttpRequestMessage httpRequestMess
### Canonicalized resource This part of the signature string represents the storage account targeted by the request. Remember that the Request URI is
-`<http://contosorest.blob.core.windows.net/?comp=list>`, with the actual account name (`contosorest` in this case). In this example, this is returned:
+`http://contosorest.blob.core.windows.net/?comp=list`, with the actual account name (`contosorest` in this case). In this example, this is returned:
``` /contosorest/\ncomp:list
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
Azure File Sync works with your standard AD-based identity without any special s
Even though changes made directly to the Azure file share will take longer to sync to the server endpoints in the sync group, you may also want to ensure that you can enforce your AD permissions on your file share directly in the cloud as well. To do this, you must domain join your storage account to your on-premises AD, just like how your Windows file servers are domain joined. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](../files/storage-files-active-directory-overview.md?toc=%2fazure%2fstorage%2ffilesync%2ftoc.json).
-> [!Important]
+> [!Important]
> Domain joining your storage account to Active Directory is not required to successfully deploy Azure File Sync. This is a strictly optional step that allows the Azure file share to enforce on-premises ACLs when users mount the Azure file share directly. ## Networking The Azure File Sync agent communicates with your Storage Sync Service and Azure file share using the Azure File Sync REST protocol and the FileREST protocol, both of which always use HTTPS over port 443. SMB is never used to upload or download data between your Windows Server and the Azure file share. Because most organizations allow HTTPS traffic over port 443, as a requirement for visiting most websites, special networking configuration is usually not required to deploy Azure File Sync.
+> [!Important]
+> Azure File Sync does not support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.
+ Based on your organization's policy or unique regulatory requirements, you may require more restrictive communication with Azure, and therefore Azure File Sync provides several mechanisms for you configure networking. Based on your requirements, you can: -- Tunnel sync and file upload/download traffic over your ExpressRoute or Azure VPN.
+- Tunnel sync and file upload/download traffic over your ExpressRoute or Azure VPN.
- Make use of Azure Files and Azure Networking features such as service endpoints and private endpoints. - Configure Azure File Sync to support your proxy in your environment. - Throttle network activity from Azure File Sync.
-> [!Important]
-> Azure File Sync does not support internet routing. The default network routing option, Microsoft routing, is supported by Azure File Sync.
+> [!Tip]
+> If you want to communicate with your Azure file share over SMB but port 445 is blocked, consider using SMB over QUIC, which offers zero-config "SMB VPN" for SMB access to your Azure file shares using the QUIC transport protocol over port 443. Although Azure Files does not directly support SMB over QUIC, you can create a lightweight cache of your Azure file shares on a Windows Server 2022 Azure Edition VM using Azure File Sync. To learn more about this option, see [SMB over QUIC with Azure File Sync](../files/storage-files-networking-overview.md#smb-over-quic).
To learn more about Azure File Sync and networking, see [Azure File Sync networking considerations](file-sync-networking-overview.md).
storsimple Storsimple 8000 Configure Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-8000-configure-web-proxy.md
An alternate way to configure web proxy settings is via the Windows PowerShell f
1. In the serial console menu, choose option 1, **Log in with full access**. When prompted, provide the **device administrator password**. The default password is `Password1`. 2. At the command prompt, type:
- `Set-HcsWebProxy -Authentication NTLM -ConnectionURI "<http://<IP address or FQDN of web proxy server>:<TCP port number>" -Username "<Username for web proxy server>"`
+ `Set-HcsWebProxy -Authentication NTLM -ConnectionURI "http://<IP address or FQDN of web proxy server>:<TCP port number>" -Username "<Username for web proxy server>"`
Provide and confirm the password when prompted.
synapse-analytics Concept Deep Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/concept-deep-learning.md
To learn more about how to run distributed training jobs in Azure Synapse Analyt
- [Tutorial: Distributed training with Horovod and PyTorch](./tutorial-horovod-pytorch.md) - [Tutorial: Distributed training with Horovod and Tensorflow](./tutorial-horovod-tensorflow.md)
-For more information about Horovod, you can visit the [Horovod documentation](https://horovod.readthedocs.io/stable/),
+For more information about Horovod, you can visit the [Horovod documentation](https://horovod.readthedocs.io/en/stable/).
### Petastorm Petastorm is an open source data access library which enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that have already been loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as Tensorflow and PyTorch.
-For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/latest).
+For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/en/latest/).
## Next steps
synapse-analytics Tutorial Load Data Petastorm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-load-data-petastorm.md
Petastorm is an open source data access library which enables single-node or distributed training of deep learning models. This library enables training directly from datasets in Apache Parquet format and datasets that have already been loaded as an Apache Spark DataFrame. Petastorm supports popular training frameworks such as Tensorflow and PyTorch.
-For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/latest).
+For more information about Petastorm, you can visit the [Petastorm GitHub page](https://github.com/uber/petastorm) or the [Petastorm API documentation](https://petastorm.readthedocs.io/en/latest).
## Prerequisites
synapse-analytics Sql Data Warehouse Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-connect-overview.md
Title: Connect to a SQL pool in Azure Synapse
-description: Get connected to SQL pool.
+description: Learn how to connect to a SQL pool in Azure Synapse.
Previously updated : 04/17/2018 Last updated : 06/13/2022 -+
-# Connect to a SQL pool in Azure Synapse
+# Connect to a SQL pool in Azure Synapse
Get connected to a SQL pool in Azure Synapse. ## Find your server name
-The server name in the following example is sqlpoolservername.database.windows.net. To find the fully qualified server name:
+The server name in the following example is `sqlpoolservername.database.windows.net`. To find the fully qualified server name:
1. Go to the [Azure portal](https://portal.azure.com).
-2. Click on **Azure Synapse Analytics**.
-3. Click on the SQL pool you want to connect to.
+2. Select **Azure Synapse Analytics**.
+3. Select the SQL pool you want to connect to.
4. Locate the full server name. ![Full server name](media/sql-data-warehouse-connect-overview/server-connect.PNG) ## Supported drivers and connection strings
-SQL pool supports [ADO.NET](/dotnet/framework/data/adonet?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json), [ODBC](/sql/connect/odbc/windows/microsoft-odbc-driver-for-sql-server-on-windows?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), [PHP](/sql/connect/php/overview-of-the-php-sql-driver?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), and [JDBC](/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). To find the latest version and documentation, click on one of the preceding drivers.
+SQL pool works with various drivers. Select any of the following drivers for the latest documentation and version information: [ADO.NET](/dotnet/framework/data/adonet?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json), [ODBC](/sql/connect/odbc/windows/microsoft-odbc-driver-for-sql-server-on-windows?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), [PHP](/sql/connect/php/overview-of-the-php-sql-driver?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), and [JDBC](/sql/connect/jdbc/microsoft-jdbc-driver-for-sql-server?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true).
-To automatically generate the connection string for the driver that you are using from the Azure portal, click on the **Show database connection strings** from the preceding example. Following are also some examples of what a connection string looks like for each driver.
+You can automatically generate a connection string for your driver. Select a driver from the previous list and then select **Show database connection strings**.
> [!NOTE] > Consider setting the connection timeout to 300 seconds to allow your connection to survive short periods of unavailability.
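As a quick way to try the connection with a longer timeout from PowerShell, here's a minimal sketch. It assumes the SqlServer module is installed; the server, database, and credentials are placeholders:

```powershell
# Minimal sketch: connect to the SQL pool with a 300-second connection timeout.
# Requires the SqlServer PowerShell module; server, database, and credential
# values below are placeholders.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "sqlpoolservername.database.windows.net" `
    -Database "mySampleDataWarehouse" `
    -Username "sqladminuser" -Password "<password>" `
    -ConnectionTimeout 300 `
    -Query "SELECT @@VERSION;"
```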
+Here are examples of connection strings for popular drivers:
+ ### ADO.NET connection string example ```csharp
jdbc:sqlserver://yourserver.database.windows.net:1433;database=yourdatabase;user
## Connection settings
-SQL pool standardizes some settings during connection and object creation. These settings cannot be overridden and include:
+SQL pool standardizes certain settings during connection and object creation. These settings cannot be overridden. They include:
| SQL pool setting | Value | |: |: |
synapse-analytics Striim Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/striim-quickstart.md
Once deployed, click on \<VM Name>-masternode in the Azure portal, click Connect
![Connect Striim to Azure Synapse Analytics][connect]
-Download the sqljdbc42.jar from <https://www.microsoft.com/en-us/download/details.aspx?id=54671> to your local machine.
+Download the [Microsoft JDBC Driver 4.2 for SQL Server](https://www.microsoft.com/download/details.aspx?id=54671) file to your local machine.
-Open a command-line window, and change directories to where you downloaded the JDBC jar. SCP the jar file to your Striim VM, getting the address and password from the Azure portal
+Open a command-line window, and change directories to where you downloaded the JDBC driver. SCP the driver file to your Striim VM, getting the address and password from the Azure portal.
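For example, a minimal sketch of the copy from a PowerShell prompt, assuming the OpenSSH `scp` and `ssh` clients are available locally; the VM address, user name, and download folder are placeholders:

```powershell
# Minimal sketch: copy the JDBC driver to the Striim VM and open an SSH session.
# The VM address, user name, and local folder are placeholders taken from the steps above.
$vmAddress = "striim-vm.westus.cloudapp.azure.com"   # placeholder; use the address shown in the Azure portal

Set-Location "C:\Downloads"                          # folder that holds sqljdbc42.jar
scp .\sqljdbc42.jar striim@${vmAddress}:/tmp
ssh striim@$vmAddress
```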
-![Copy jar file to your VM][copy-jar]
+![Copy driver file to your VM][copy-jar]
-Open another command-line window, or use an ssh utility to ssh into the Striim cluster
+Open another command-line window, or use an ssh utility to ssh into the Striim cluster.
![SSH into the cluster][ssh]
-Execute the following commands to move the JDBC jar file into Striim's lib directory, and start and stop the server.
+Execute the following commands to move the file into Striim's lib directory, and start and stop the server.
1. sudo su 2. cd /tmp
time-series-insights Time Series Insights Add Reference Data Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-add-reference-data-set.md
Reference data is not joined retroactively. Thus, only current and future ingres
### Learn about Time Series Insights' reference data model.
-> [!VIDEO <https://www.youtube.com/embed/Z0NuWQUMv1o>]
+> [!VIDEO https://www.youtube.com/embed/Z0NuWQUMv1o]
## Add a reference data set
time-series-insights Time Series Insights Manage Reference Data Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-manage-reference-data-csharp.md
The sample code below demonstrates the following features:
Complete the following steps before you compile and run the sample code:
-1. [Provision a Gen 1 Azure Time Series Insights](./time-series-insights-get-started.md
-) environment.
+1. [Provision a Gen 1 Azure Time Series Insights](./time-series-insights-get-started.md) environment.
1. [Create a Reference Data set](time-series-insights-add-reference-data-set.md) within your environment. Use the following Reference Data scheme:
time-series-insights Time Series Insights Send Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-send-events.md
In Azure Time Series Insights Gen2, you can add contextual data to incoming tele
[![Copy the value for the primary key connection string](media/send-events/configure-sample-code-connection-string.png)](media/send-events/configure-sample-code-connection-string.png#lightbox)
-1. Go to <https://tsiclientsample.azurewebsites.net/windFarmGen.html>. The URL creates and runs simulated windmill devices.
+1. Navigate to the [TSI Sample Wind Farm Pusher](https://tsiclientsample.azurewebsites.net/windFarmGen.html). The site creates and runs simulated windmill devices.
1. In the **Event Hub Connection String** box on the webpage, paste the connection string that you copied in the [windmill input field](#push-events-to-windmills-sample). [![Paste the primary key connection string in the Event Hub Connection String box](media/send-events/configure-wind-mill-sim.png)](media/send-events/configure-wind-mill-sim.png#lightbox)
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -Re
To directly assign a user to a session host in the Azure portal:
-1. Sign in to the Azure portal at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Enter **Azure Virtual Desktop** into the search bar. 3. Under **Services**, select **Azure Virtual Desktop**. 4. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -Re
> - If the session host has no user assignment, nothing will happen when you run this cmdlet. To unassign a personal desktop in the Azure portal:
-1. Sign in to the Azure portal at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Enter **Azure Virtual Desktop** into the search bar. 3. Under **Services**, select **Azure Virtual Desktop**. 4. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -Re
> - If the session host currently has no user assignment, the personal desktop will be assigned to the provided UPN. To reassign a personal desktop in the Azure portal:
-1. Sign in to the Azure portal at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Enter **Azure Virtual Desktop** into the search bar. 3. Under **Services**, select **Azure Virtual Desktop**. 4. At the Azure Virtual Desktop page, go to the menu on the left side of the window and select **Host pools**.
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
This section is based on [Create a profile container for a host pool using a fil
## Make sure users can access the Azure NetApp File share
-1. Open your internet browser and go to <https://rdweb.wvd.microsoft.com/arm/webclient>.
+1. Browse to <https://rdweb.wvd.microsoft.com/arm/webclient>.
2. Sign in with the credentials of a user assigned to the Remote Desktop group.
virtual-desktop Create Validation Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-validation-host-pool.md
You can configure any existing pooled or personal host pool to be a validation h
To use the Azure portal to configure your validation host pool:
-1. Sign in to the Azure portal at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Search for and select **Azure Virtual Desktop**. 3. In the Azure Virtual Desktop page, select **Host pools**. 4. Select the name of the host pool you want to edit.
virtual-desktop Customize Feed For Virtual Desktop Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-feed-for-virtual-desktop-users.md
Update-AzWvdDesktop -ResourceGroupName <resourcegroupname> -ApplicationGroupName
You can change the display name for a published remote desktop by setting a friendly name using the Azure portal.
-1. Sign in to the Azure portal at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Search for **Azure Virtual Desktop**.
virtual-desktop Customize Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/customize-rdp-properties.md
Before you begin, follow the instructions in [Set up the Azure Virtual Desktop P
To configure RDP properties in the Azure portal:
-1. Sign in to Azure at <https://portal.azure.com>.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Enter **Azure Virtual Desktop** into the search bar. 3. Under Services, select **Azure Virtual Desktop**. 4. At the Azure Virtual Desktop page, select **host pools** in the menu on the left side of the screen.
virtual-desktop Install Office On Wvd Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-office-on-wvd-master-image.md
Here's how to install OneDrive in per-machine mode:
1. First, create a location to stage the OneDrive installer. A local disk folder or [\\\\unc](file://unc) location is fine.
-2. Download OneDriveSetup.exe to your staged location with this link: <https://go.microsoft.com/fwlink/?linkid=844652>
+2. Download [OneDriveSetup.exe](https://go.microsoft.com/fwlink/?linkid=844652) to your staged location.
-3. If you installed office with OneDrive by omitting **\<ExcludeApp ID="OneDrive" /\>**, uninstall any existing OneDrive per-user installations from an elevated command prompt by running the following command:
+3. If you installed Office with OneDrive by omitting `<ExcludeApp ID="OneDrive" />`, uninstall any existing OneDrive per-user installations from an elevated command prompt by running the following command:
```cmd "[staged location]\OneDriveSetup.exe" /uninstall
virtual-desktop Powershell Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/powershell-module.md
Connect-AzAccount
> Connect-AzAccount -EnvironmentName AzureChinaCloud > ```
-Signing into your Azure account requires a code that's generated when you run the Connect cmdlet. To sign in, go to <https://microsoft.com/devicelogin>, enter the code, then sign in using your Azure admin credentials.
+Signing into your Azure account requires a code that's generated when you run the Connect cmdlet. Sign in via [device login](https://microsoft.com/devicelogin), enter the code, then sign in using your Azure admin credentials.
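If you prefer to request the device code flow explicitly, a minimal sketch (assuming a current Az module) is:

```powershell
# Minimal sketch: force the device code sign-in flow explicitly.
# Connect-AzAccount prints the code and the https://microsoft.com/devicelogin URL to use.
Connect-AzAccount -UseDeviceAuthentication
```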
-```powershell
+```output
Account SubscriptionName TenantId Environment - - -- --
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 06/20/2022 Last updated : 06/23/2022
The following table lists the latest versions of the WebSocket Service:
### Updates for version 1.17.2205.23001 - Fixed an issue that made the WebRTC redirector service disconnect from Teams on Azure Virtual Desktop.
+- Added keyboard shortcut detection for Shift+Ctrl+; that lets users turn on a diagnostic overlay during calls on Teams for Azure Virtual Desktop. This feature is supported in version 1.2.3313 or later of the Windows Desktop client.
- Added further stability and reliability improvements to the service. #### Updates for version 1.4.2111.18001
Using Teams in a virtualized environment is different from using Teams in a non-
- The Teams desktop client in Azure Virtual Desktop environments doesn't support creating live events, but you can join live events. For now, we recommend you create live events from the [Teams web client](https://teams.microsoft.com) in your remote session instead. When watching a live event in the browser, [enable multimedia redirection (MMR) for Teams live events](multimedia-redirection.md#how-to-use-mmr-for-teams-live-events) for smoother playback. - Calls or meetings don't currently support application sharing. Desktop sessions support desktop sharing.-- Give control and take control aren't currently supported. - Due to WebRTC limitations, incoming and outgoing video stream resolution is limited to 720p. - The Teams app doesn't support HID buttons or LED controls with other devices. - New Meeting Experience (NME) is not currently supported in VDI environments.
For Teams known issues that aren't related to virtualized environments, see [Sup
- You can't configure audio devices from the Teams app, and the client will automatically use the default client audio device. To switch audio devices, you'll need to configure your settings from the client audio preferences instead. - Teams for Azure Virtual Desktop on macOS doesn't currently support background effects such as background blur and background images.
+- Give control and take control aren't currently supported.
## Collect Teams logs
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-web.md
While any HTML5-capable browser should work, we officially support the following
## Access remote resources feed
-In a browser, navigate to the Azure Resource Manager-integrated version of the Azure Virtual Desktop web client at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> and sign in with your user account.
+In a browser, navigate to the Azure Resource Manager-integrated version of the [Azure Virtual Desktop web client](https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html) and sign in with your user account.
>[!IMPORTANT]
->We plan to start automatically redirecting to a new web client URL at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> as of April 18th, 2022. The current URLs at <https://rdweb.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> and <https://www.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> will still be available, but we recommend you update your bookmarks to the new URL at <https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html> as soon as possible.
+>We plan to start automatically redirecting to a new web client URL at `https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` as of April 18th, 2022. The URLs at `https://rdweb.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` and `https://www.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` will still be available, but we recommend you update your bookmarks to the new URL at `https://client.wvd.microsoft.com/arm/webclient/https://docsupdatetracker.net/index.html` as soon as possible.
>[!NOTE] >If you're using Azure Virtual Desktop (classic) without Azure Resource Manager integration, connect to your resources at <https://client.wvd.microsoft.com/webclient/https://docsupdatetracker.net/index.html> instead. > > If you're using the US Gov portal, use <https://rdweb.wvd.azure.us/arm/webclient/https://docsupdatetracker.net/index.html>. >
-> To connect to the Azure China portal, use <https://rdweb.wvd.azure.cn/arm/webclient/https://docsupdatetracker.net/index.html>.
+> To connect to the Azure China portal, use `https://rdweb.wvd.azure.cn/arm/webclient/https://docsupdatetracker.net/index.html`.
>[!NOTE] >If you've already signed in with a different Azure Active Directory account than the one you want to use for Azure Virtual Desktop, you should either sign out or use a private browser window.
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
To set the Redirect URI:
> [!div class="mx-imgBorder"] > ![The redirect URI page](../media/redirect-uri-page.png)
-8. Now, go to your Azure resources, select the Azure App Services resource with the name you provided in the template and navigate to the URL associated with it. (For example, if the app name you used in the template was `contosoapp45`, then your associated URL is <http://contoso.azurewebsites.net>).
+8. Now, go to your Azure resources, select the Azure App Services resource with the name you provided in the template, and navigate to the URL associated with it. (For example, if the app name you used in the template was `contosoapp45`, then your associated URL is `http://contosoapp45.azurewebsites.net`).
9. Sign in using the appropriate Azure Active Directory user account. 10. Select **Accept**.
virtual-desktop Manage Resources Using Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui.md
Follow these instructions to deploy the Azure Resource Manager template:
- If you're deploying in a Cloud Solution Provider subscription, follow these instructions to deploy to Azure: 1. Scroll down and right-click **Deploy to Azure**, then select **Copy Link Location**. 2. Open a text editor like Notepad and paste the link there.
- 3. Right after <https://portal.azure.com/> and before the hashtag (#), enter an at sign (@) followed by the tenant domain name. Here's an example of the format: <https://portal.azure.com/@Contoso.onmicrosoft.com#create/>.
+ 3. Right after `https://portal.azure.com` and before the hashtag (`#`), enter an at sign (`@`) followed by the tenant domain name. For example: `https://portal.azure.com/@Contoso.onmicrosoft.com#create/`.
4. Sign in to the Azure portal as a user with Admin/Contributor permissions to the Cloud Solution Provider subscription. 5. Paste the link you copied to the text editor into the address bar. 3. When entering the parameters, do the following:
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Here's what changed in April 2022:
### Intune device configuration for Windows multisession now generally available
-Deploying Intune device configuration policies from Microsoft Endpoint Manager admin center to Windows multisession VMs on Azure Virtual Desktop is now generally available. Learn more at [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) and[ our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/intune-device-configuration-for-azure-virtual-desktop-multi/ba-p/3294444).
+Deploying Intune device configuration policies from Microsoft Endpoint Manager admin center to Windows multisession VMs on Azure Virtual Desktop is now generally available. Learn more at [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/azure-virtual-desktop-multi-session) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/intune-device-configuration-for-azure-virtual-desktop-multi/ba-p/3294444).
### Scheduled Agent Updates public preview
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
When you create a VM and add it to a Flexible scale set, you have full control o
The preferred method is to use Azure Resource Graph to query for all VMs in a Virtual Machine Scale Set. Azure Resource Graph provides efficient query capabilities for Azure resources at scale across subscriptions. ```
+resources
| where type =~ 'Microsoft.Compute/virtualMachines'
-| where properties.virtualMachineScaleSet contains "demo"
+| where properties.virtualMachineScaleSet.id contains "demo"
| extend powerState = properties.extended.instanceView.powerState.code | project name, resourceGroup, location, powerState | order by resourceGroup desc, name desc
virtual-machines Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/demo.md
Now that you have the package(s), you must upload them to your VM on Azure.
`/home/MyUserID/ZDT/adcd/nov2017/volumes`
-5. Upload the files using an SSH client such as[WinSCP](https://winscp.net/eng/index.php). Since SCP is a part of SSH , it uses port 22, which is what SSH uses. If your local computer is not Windows, you can type the [scp command](http://man7.org/linux/man-pages/man1/scp.1.html) in your SSH session.
+5. Upload the files using an SSH client such as [WinSCP](https://winscp.net/eng/index.php). Because SCP is part of SSH, it uses port 22, the same port as SSH. If your local computer is not Windows, you can type the [scp command](http://man7.org/linux/man-pages/man1/scp.1.html) in your SSH session.
6. Initiate the upload to the Azure VM directory you created, which becomes the image storage for zD&T.
virtual-machines Install Openframe Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/tmaxsoft/install-openframe-azure.md
You can set up the OpenFrame environment using various deployment patterns, but
**To create a VM**
-1. Go to the Azure portal at <https://portal.azure.com> and sign in to your account.
+1. Sign in to the [Azure portal](https://portal.azure.com).
2. Click **Virtual machines**.
When giving new individuals access the VM:
**To generate a public/private key pair**
-1. Download PuTTYgen from <https://www.putty.org/> and install it using all the default settings.
+1. Download [PuTTYgen](https://www.putty.org) and install it using all the default settings.
-2. To open PuTTYgen, locate the PuTTY installation directory in C:\\Program Files\\PuTTY.
+2. To open PuTTYgen, locate the PuTTY installation directory in `C:\Program Files\PuTTY`.
![PuTTY interface](media/puttygen-01.png)
Before installing JEUS, install the Apache Ant package, which provides the libra
http://<IP>:<port>/webadmin/login ```
- For example, <http://192.168.92.133:9736/webadmin/login.> The logon screen appears:
+ For example, `http://192.168.92.133:9736/webadmin/login`. The logon screen appears:
![JEUS WebAdmin logon screen](media/jeus-01.png)
virtual-machines Oracle Vm Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-vm-solutions.md
Oracle and Microsoft are collaborating to bring WebLogic Server to the Azure Mar
-Dweblogic.rjvm.enableprotocolswitch=true ```
-For related information, see KB article **860340.1** at <https://support.oracle.com>.
+For related information, see KB article **860340.1** at [support.oracle.com](https://support.oracle.com).
- **Dynamic clustering and load balancing limitations.** Suppose you want to use a dynamic cluster in Oracle WebLogic Server and expose it through a single, public load-balanced endpoint in Azure. This can be done as long as you use a fixed port number for each of the managed servers (not dynamically assigned from a range) and do not start more managed servers than there are machines the administrator is tracking. That is, there is no more than one managed server per virtual machine). If your configuration results in more Oracle WebLogic Servers being started than there are virtual machines (that is, where multiple Oracle WebLogic Server instances share the same virtual machine), then it is not possible for more than one of those instances of Oracle WebLogic Servers to bind to a given port number. The others on that virtual machine fail.
virtual-machines Jboss Eap On Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-rhel.md
To start JBoss EAP with a different configuration, use the `--server-config` arg
For a complete listing of all available startup script arguments and their purposes, use the `--help` argument. For more information, see [Server Runtime Arguments on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/configuration_guide/reference_material#reference_of_switches_and_arguments_to_pass_at_server_runtime) or [Server Runtime Arguments on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/configuration_guide/reference_material#reference_of_switches_and_arguments_to_pass_at_server_runtime).
-JBoss EAP can also work in cluster mode. JBoss EAP cluster messaging allows grouping of JBoss EAP messaging servers to share message processing load. Each active node in the cluster is an active JBoss EAP messaging server, which manages its own messages and handles its own connections. To learn more, see [Clusters Overview on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/configuring_messaging/clusters_overview) or [ Clusters Overview on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/configuring_messaging/clusters_overview).
+JBoss EAP can also work in cluster mode. JBoss EAP cluster messaging allows grouping of JBoss EAP messaging servers to share message processing load. Each active node in the cluster is an active JBoss EAP messaging server, which manages its own messages and handles its own connections. To learn more, see [Clusters Overview on EAP 7.2](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.2/html/configuring_messaging/clusters_overview) or [Clusters Overview on EAP 7.3](https://access.redhat.com/documentation/en/red_hat_jboss_enterprise_application_platform/7.3/html/configuring_messaging/clusters_overview).
## Support and subscription notes These Quickstart templates are offered as follows:
virtual-machines Dbms_Guide_Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_ibm.md
# IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload With Microsoft Azure, you can migrate your existing SAP application running on IBM Db2 for Linux, UNIX, and Windows (LUW) to Azure virtual machines. With SAP on IBM Db2 for LUW, administrators and developers can still use the same development and administration tools, which are available on-premises.
-General information about running SAP Business Suite on IBM Db2 for LUW can be found in the SAP Community Network (SCN) at <https://www.sap.com/community/topic/db2-for-linux-unix-and-windows.html>.
+General information about running SAP Business Suite on IBM Db2 for LUW is available via the SAP Community Network (SCN) in [SAP on IBM Db2 for Linux, UNIX, and Windows](https://www.sap.com/community/topic/db2-for-linux-unix-and-windows.html).
For more information and updates about SAP on Db2 for LUW on Azure, see SAP Note [2233094].
-The are various articles on SAP workload on Azure released. It is recommended starting in [SAP workload on Azure - Get Started](./get-started.md) and then pick the area of interests
+Various articles on SAP workload on Azure have been published. We recommend beginning with [Get started with SAP on Azure VMs](./get-started.md) and then reading about the other areas that interest you.
The following SAP Notes are related to SAP on Azure regarding the area covered in this document:
IBM Db2 LUW 11.5 released support for 4-KB sector size. Though you need to enabl
For older Db2 versions, a 512-Byte sector size must be used. Premium SSD disks are 4-KB native and have 512-Byte emulation. Ultra disk uses a 4-KB sector size by default. You can enable a 512-Byte sector size during creation of an Ultra disk. Details are available in [Using Azure ultra disks](../../disks-enable-ultra-ssd.md#deploy-an-ultra-disk512-byte-sector-size). This 512-Byte sector size is a prerequisite for IBM Db2 LUW versions lower than 11.5.
-On Windows using Storage pools for Db2 storage paths for `log_dir`, `sapdata` and `saptmp` directories, you must specify a physical disk sector size of 512-Byte. When using Windows Storage Pools, you must create the storage pools manually via command line interface using the parameter `-LogicalSectorSizeDefault`. For more information, see <https://technet.microsoft.com/itpro/powershell/windows/storage/new-storagepool>.
-
+On Windows, when you use storage pools for the Db2 storage paths `log_dir`, `sapdata`, and `saptmp`, you must specify a physical disk sector size of 512 bytes. When using Windows Storage Pools, you must create the storage pools manually via the command-line interface using the parameter `-LogicalSectorSizeDefault`. For more information, see [New-StoragePool](/powershell/module/storage/new-storagepool).
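For illustration, here's a minimal PowerShell sketch of creating such a pool with a 512-byte logical sector size; the pool name, subsystem lookup, and disk selection are placeholders and not part of the SAP or Db2 documentation:

```powershell
# Minimal sketch: create a storage pool with a 512-byte logical sector size for the
# Db2 log_dir, sapdata, and saptmp volumes. Pool and subsystem names are placeholders.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "Db2Pool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "Windows Storage*").FriendlyName `
    -PhysicalDisks $disks `
    -LogicalSectorSizeDefault 512
```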
## Recommendation on VM and disk structure for IBM Db2 deployment
virtual-machines Dbms_Guide_Maxdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_maxdb.md
This document covers several different areas to consider when deploying MaxDB, l
## Specifics for the SAP MaxDB deployments on Windows ### SAP MaxDB Version Support on Azure
-SAP currently supports SAP MaxDB version 7.9 or higher for use with SAP NetWeaver-based products in Azure. All updates for SAP MaxDB server, or JDBC and ODBC drivers to be used with SAP NetWeaver-based products are provided solely through the SAP Service Marketplace at <https://support.sap.com/swdc>.
-General information on running SAP NetWeaver on SAP MaxDB can be found at <https://www.sap.com/community/topic/maxdb.html>.
+SAP currently supports SAP MaxDB version 7.9 or higher for use with SAP NetWeaver-based products in Azure. All updates for SAP MaxDB server, or JDBC and ODBC drivers to be used with SAP NetWeaver-based products are provided solely through the [SAP Service Marketplace](https://support.sap.com/en/my-support/software-downloads.html). For more information about running SAP NetWeaver on SAP MaxDB, see [SAP MaxDB](https://www.sap.com/community/topic/maxdb.html).
-### Supported Microsoft Windows Versions and Azure VM types for SAP MaxDB DBMS
+### Supported Microsoft Windows versions and Azure VM types for SAP MaxDB DBMS
To find the supported Microsoft Windows version for SAP MaxDB DBMS on Azure, see: * [SAP Product Availability Matrix (PAM)][sap-pam]
If you configure the SAP Content Server to store files in the file system, one o
#### Other Other SAP Content Server-specific settings are transparent to Azure VMs and are described in various documents and SAP Notes:
-* <https://service.sap.com/contentserver>
-* SAP Note [1619726]
+* [SAP NetWeaver](https://service.sap.com/contentserver)
+* SAP Note [1619726]
virtual-machines Dbms_Guide_Sqlserver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_sqlserver.md
This document covers several different areas to consider when deploying SQL Serv
> [!IMPORTANT]
-> The scope of this document is the Windows version on SQL Server. SAP is not supporting the Linux version of SQL Server with any of the SAP software. The document is not discussing Microsoft Azure SQL Database, which is a Platform as a Service offer of the Microsoft Azure Platform. The discussion in this paper is about running the SQL Server product as it's known for on-premises deployments in Azure Virtual Machines, leveraging the Infrastructure as a Service capability of Azure. Database capabilities and functionality between these two offers are different and should not be mixed up with each other. See also: <https://azure.microsoft.com/services/sql-database/>
->
->
+> The scope of this document is the Windows version on SQL Server. SAP is not supporting the Linux version of SQL Server with any of the SAP software. The document is not discussing Microsoft Azure SQL Database, which is a Platform as a Service offer of the Microsoft Azure Platform. The discussion in this paper is about running the SQL Server product as it's known for on-premises deployments in Azure Virtual Machines, leveraging the Infrastructure as a Service capability of Azure. Database capabilities and functionality between these two offers are different and should not be mixed up with each other. For more information, see [Azure SQL Database](https://azure.microsoft.com/services/sql-database/).
In general, you should consider using the most recent SQL Server releases to run SAP workload in Azure IaaS. The latest SQL Server releases offer better integration into some of the Azure services and functionality. Or have changes that optimize operations in an Azure IaaS infrastructure.
For Azure M-Series VM, the latency writing into the transaction log can be reduc
### Formatting the disks For SQL Server, the NTFS block size for disks containing SQL Server data and log files should be 64 KB. There's no need to format the D:\ drive. This drive comes pre-formatted.
-To make sure that the restore or creation of databases isn't initializing the data files by zeroing the content of the files, you should make sure that the user context the SQL Server service is running in has a certain permission. Usually users in the Windows Administrator group have these permissions. If the SQL Server service is run in the user context of non-Windows Administrator user, you need to assign that user the User Right **Perform volume maintenance tasks**. See the details in this Microsoft Knowledge Base Article: <https://support.microsoft.com/kb/2574695>
+To make sure that the restore or creation of databases isn't initializing the data files by zeroing the content of the files, you should make sure that the user context the SQL Server service is running in has a certain permission. Usually users in the Windows Administrator group have these permissions. If the SQL Server service is run in the user context of non-Windows Administrator user, you need to assign that user the User Right **Perform volume maintenance tasks**. For more information, see [Database instant file initialization](/sql/relational-databases/databases/database-instant-file-initialization?view=sql-server-ver16).
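For illustration, a minimal PowerShell sketch of formatting a data disk with a 64-KB allocation unit size; the drive letter and volume label are placeholders:

```powershell
# Minimal sketch: format a data disk with a 64-KB NTFS allocation unit size for
# SQL Server data and log files. The drive letter and label are placeholders.
Format-Volume -DriveLetter F -FileSystem NTFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel "SQLData" -Confirm:$false
```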
### Influence of database compression In configurations where I/O bandwidth can become a limiting factor, every measure that reduces IOPS might help to stretch the workload one can run in an IaaS scenario like Azure. Therefore, if not yet done, applying SQL Server PAGE compression is recommended by both SAP and Microsoft before uploading an existing SAP database to Azure.
For quite many SAP customers, there was no possibility to start over and introdu
## <a name="1b353e38-21b3-4310-aeb6-a77e7c8e81c8"></a>Using a SQL Server image out of the Microsoft Azure Marketplace Microsoft offers VMs in the Azure Marketplace, which already contain versions of SQL Server. For SAP customers who require licenses for SQL Server and Windows, using these images might be an opportunity to cover the need for licenses by spinning up VMs with SQL Server already installed. In order to use such images for SAP, the following considerations need to be made:
-* The SQL Server non-Evaluation versions acquire higher costs than a 'Windows-only' VM deployed from Azure Marketplace. See these articles to compare prices: <https://azure.microsoft.com/pricing/details/virtual-machines/windows/> and <https://azure.microsoft.com/pricing/details/virtual-machines/sql-server-enterprise/>.
+* The SQL Server non-evaluation versions acquire higher costs than a 'Windows-only' VM deployed from Azure Marketplace. To compare prices, see [Windows Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) and [SQL Server Enterprise Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/sql-server-enterprise/).
* You only can use SQL Server releases, which are supported by SAP. * The collation of the SQL Server instance, which is installed in the VMs offered in the Azure Marketplace isn't the collation SAP NetWeaver requires the SQL Server instance to run. You can change the collation though with the directions in the following section.
As Always On is supported for SAP on-premises (see SAP Note [1772688]), it's sup
Some considerations using an Availability Group Listener are:
-* Using an Availability Group Listener is only possible with Windows Server 2012 or higher as guest OS of the VM. For Windows Server 2012 you need to make sure that this patch is applied: <https://support.microsoft.com/kb/2854082>
+* Using an Availability Group Listener is only possible with Windows Server 2012 or higher as guest OS of the VM. For Windows Server 2012, ensure that the [update to enable SQL Server Availability Group Listeners on Windows Server 2008 R2 and Windows Server 2012-based Microsoft Azure virtual machines](https://support.microsoft.com/kb/2854082) has been applied.
* For Windows Server 2008 R2, this patch doesn't exist and Always On would need to be used in the same manner as Database Mirroring by specifying a failover partner in the connections string (done through the SAP default.pfl parameter dbs/mss/server - see SAP Note [965908]). * When using an Availability Group Listener, the Database VMs need to be connected to a dedicated Load Balancer. To avoid that Azure is assigning new IP addresses in cases where both VMs incidentally are shut down, one should assign static IP addresses to the network interfaces of those VMs in the Always On configuration (defining a static IP address is described in [this][virtual-networks-reserved-private-ip] article) * There are special steps required when building the WSFC cluster configuration where the cluster needs a special IP address assigned, because Azure with its current functionality would assign the cluster name the same IP address as the node the cluster is created on. This behavior means a manual step must be performed to assign a different IP address to the cluster.
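For illustration, a minimal Az PowerShell sketch of pinning a static private IP address on a VM's network interface, as mentioned above; the resource group, NIC name, and IP address are placeholders:

```powershell
# Minimal sketch: pin a static private IP address on a VM network interface so the
# Always On nodes keep their addresses across deallocations. Names are placeholders.
$nic = Get-AzNetworkInterface -ResourceGroupName "SAP-RG" -Name "sqlvm1-nic"

$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.10"

Set-AzNetworkInterface -NetworkInterface $nic
```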
virtual-machines Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/deployment-guide.md
The following flowchart shows the SAP-specific sequence of steps for deploying a
The easiest way to create a new virtual machine with an image from the Azure Marketplace is by using the Azure portal.
-1. Go to <https://portal.azure.com/#create/hub>. Or, in the Azure portal menu, select **+ New**.
+1. Navigate to [Create a resource in the Azure portal](https://portal.azure.com/#create/hub). Or, in the Azure portal menu, select **+ New**.
1. Select **Compute**, and then select the type of operating system you want to deploy. For example, Windows Server 2012 R2, SUSE Linux Enterprise Server 12 (SLES 12), Red Hat Enterprise Linux 7.2 (RHEL 7.2), or Oracle Linux 7.2. The default list view does not show all supported operating systems. Select **see all** for a full list. For more information about supported operating systems for SAP software deployment, see SAP Note [1928533]. 1. On the next page, review terms and conditions. 1. In the **Select a deployment model** box, select **Resource Manager**.
The following flowchart shows the SAP-specific sequence of steps for deploying a
The easiest way to create a new virtual machine from a Managed Disk image is by using the Azure portal. For more information on how to create a Manage Disk Image, read [Capture a managed image of a generalized VM in Azure](../../windows/capture-image-resource.md)
-1. Go to <https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Compute%2Fimages>. Or, in the Azure portal menu, select **Images**.
+1. Navigate to [Images in the Azure portal](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.Compute%2Fimages). Or, in the Azure portal menu, select **Images**.
1. Select the Managed Disk image you want to deploy and click on **Create VM** The wizard guides you through setting the required parameters to create the virtual machine, in addition to all required resources, like network interfaces and storage accounts. Some of these parameters are:
virtual-machines Hana Vm Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/hana-vm-operations.md
As your Azure VM infrastructure is deployed, and all other preparations are done
- Install the SAP HANA main node according to SAP's documentation - When using Azure Premium Storage or Ultra disk storage with non-shared disks of `/hana/data` and `/hana/log`, add the parameter `basepath_shared = no` to the `global.ini` file. This parameter enables SAP HANA to run in scale-out without shared `/hana/data` and `/hana/log` volumes between the nodes. Details are documented in [SAP Note #2080991](https://launchpad.support.sap.com/#/notes/2080991). If you're using NFS volumes based on ANF for /hana/data and /hana/log, you don't need to make this change - After the eventual change in the global.ini parameter, restart the SAP HANA instance-- Add more worker nodes. See also <https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/0d9fe701e2214e98ad4f8721f6558c34.html>. Specify the internal network for SAP HANA inter-node communication during the installation or afterwards using, for example, the local hdblcm. For more detailed documentation, see also [SAP Note #2183363](https://launchpad.support.sap.com/#/notes/2183363).
+- Add more worker nodes. For more information, see [Add Hosts Using the Command-Line Interface](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.00/en-US/0d9fe701e2214e98ad4f8721f6558c34.html). Specify the internal network for SAP HANA inter-node communication during the installation or afterwards using, for example, the local hdblcm. For more detailed documentation, see [SAP Note #2183363](https://launchpad.support.sap.com/#/notes/2183363).
To set up an SAP HANA scale-out system with a standby node, see the [SUSE Linux deployment instructions](./sap-hana-scale-out-standby-netapp-files-suse.md) or the [Red Hat deployment instructions](./sap-hana-scale-out-standby-netapp-files-rhel.md).
virtual-machines High Availability Guide Rhel Glusterfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-glusterfs.md
You first need to create the virtual machines for this cluster.
1. Create an Availability Set Set max update domain 1. Create Virtual Machine 1
- Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
- <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM>
+ Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
Select Availability Set created earlier 1. Create Virtual Machine 2
- Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
- <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM>
+ Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
Select Availability Set created earlier 1. Add one data disk for each SAP system to both virtual machines.
virtual-machines High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-pacemaker.md
The following items are prefixed with either **[A]** - applicable to all nodes,
The STONITH device uses a Service Principal to authorize against Microsoft Azure. Follow these steps to create a Service Principal.
-1. Go to <https://portal.azure.com>
+1. Go to the [Azure portal](https://portal.azure.com).
1. Open the Azure Active Directory blade Go to Properties and make a note of the Directory ID. This is the **tenant ID**. 1. Click App registrations
virtual-machines High Availability Guide Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel.md
You first need to create the virtual machines for this cluster. Afterwards, you
1. Create an Availability Set Set max update domain 1. Create Virtual Machine 1
- Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
- <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM>
+ Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
Select Availability Set created earlier 1. Create Virtual Machine 2
- Use at least RHEL 7, in this example the Red Hat Enterprise Linux 7.4 image
- <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM>
+ Use at least RHEL 7, in this example the [Red Hat Enterprise Linux 7.4 image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux74-ARM).
Select Availability Set created earlier 1. Add at least one data disk to both virtual machines The data disks are used for the /usr/sap/`<SAPSID`> directory
virtual-machines Planning Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/planning-guide.md
The entry point for SAP workload on Azure documentation is found at [Get started
> [!IMPORTANT]
-> Wherever possible a link to the referring SAP Installation Guides or other SAP documentation is used (Reference InstGuide-01, see <http://service.sap.com/instguides>). When it comes to the prerequisites, installation process, or details of specific SAP functionality the SAP documentation and guides should always be read carefully, as the Microsoft documents only covers specific tasks for SAP software installed and operated in a Microsoft Azure Virtual Machine.
->
->
+> Wherever possible a link to the referring SAP Installation Guides or other SAP documentation is used (Reference InstGuide-01 via the [SAP Help Portal](http://service.sap.com/instguides)). When it comes to the prerequisites, installation process, or details of specific SAP functionality the SAP documentation and guides should always be read carefully, as the Microsoft documents only covers specific tasks for SAP software installed and operated in an Azure virtual machine.
The following SAP Notes are related to the topic of SAP on Azure:
Microsoft Azure provides a network infrastructure, which allows the mapping of a
* Cross-premises Connectivity between a customer's on-premises network and the Azure network * Cross Azure Region or data center connectivity between Azure sites
-More information can be found here: <https://azure.microsoft.com/documentation/services/virtual-network/>
+For more information, see the [Virtual Network documentation](/azure/virtual-network/).
There are many different possibilities to configure name and IP resolution in Azure. There is also an Azure DNS service, which can be used instead of setting up your own DNS server. More information can be found in [this article][virtual-networks-manage-dns-in-vnet] and on [this page](https://azure.microsoft.com/services/dns/).
See example here:
Deployment of the Azure Extension for SAP (see chapter [Azure Extension for SAP][planning-guide-9.1] in this document) is only possible via PowerShell or CLI. Therefore it is mandatory to set up and configure PowerShell or CLI when deploying or administering an SAP NetWeaver system in Azure.
-As Azure provides more functionality, new PowerShell cmdlets are going to be added that requires an update of the cmdlets. Therefore it makes sense to check the Azure Download site at least once the month <https://azure.microsoft.com/downloads/> for a new version of the cmdlets. The new version is installed on top of the older version.
+As Azure provides more functionality, new PowerShell cmdlets are added, which requires updating the cmdlets. Therefore, you should check the [Azure Downloads site](https://azure.microsoft.com/downloads/) at least monthly for a new version of the cmdlets. The new version is installed on top of the older version.
For a general list of Azure-related PowerShell commands check here: [Azure PowerShell documentation][azure-ps].
az disk create --source "/subscriptions/<subscription id>/resourceGroups/<resour
##### Azure Storage tools
-* <https://storageexplorer.com/>
+* [Azure Storage Explorer](/azure/vs-azure-tools-storage-manage-with-storage-explorer)
Professional editions of Azure Storage Explorers can be found here:
-* <https://www.cerebrata.com/>
-* <https://clumsyleaf.com/products/cloudxplorer>
+* [Cerebrata](https://www.cerebrata.com/)
+* [Cloud Xplorer](https://clumsyleaf.com/products/cloudxplorer)
The copy of a VHD itself within a storage account is a process, which takes only a few seconds (similar to SAN hardware creating snapshots with lazy copy and copy on write). After you have a copy of the VHD file, you can attach it to a virtual machine or use it as an image to attach copies of the VHD to virtual machines.
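For illustration, a minimal Az PowerShell sketch of copying a VHD blob between two storage accounts with a server-side copy; the account names, keys, containers, and blob names are placeholders:

```powershell
# Minimal sketch: copy a VHD blob from one storage account to another with a
# server-side copy. Account names, keys, and container/blob names are placeholders.
$srcCtx  = New-AzStorageContext -StorageAccountName "srcaccount"  -StorageAccountKey "<source key>"
$destCtx = New-AzStorageContext -StorageAccountName "destaccount" -StorageAccountKey "<destination key>"

Start-AzStorageBlobCopy -SrcContainer "vhds" -SrcBlob "sapvm-osdisk.vhd" -Context $srcCtx `
    -DestContainer "backups" -DestBlob "sapvm-osdisk.vhd" -DestContext $destCtx

# Poll the copy status until the server-side copy completes.
Get-AzStorageBlobCopyState -Container "backups" -Blob "sapvm-osdisk.vhd" -Context $destCtx -WaitForComplete
```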
In the table below typical SAP communication ports are listed. Basically it is s
**) sid = SAP-System-ID
-More detailed information on ports required for different SAP products or services by SAP products can be found here
-<https://scn.sap.com/docs/DOC-17124>.
-With this document, you should be able to open dedicated ports in the VPN device necessary for specific SAP products and scenarios.
+For more information, see [TCP/IP Ports Used by SAP Applications](https://scn.sap.com/docs/DOC-17124). Using this document, you can open dedicated ports in the VPN device necessary for specific SAP products and scenarios.
Other security measures when deploying VMs in such a scenario could include creating a [Network Security Group][virtual-networks-nsg] to define access rules.
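For illustration, here's a minimal Az PowerShell sketch that adds such a rule for one typical SAP port (the dispatcher port 32&lt;instance number&gt;); the resource names, priority, and source address range are placeholders:

```powershell
# Minimal sketch: add an inbound NSG rule that allows the SAP dispatcher port
# (32<instance number>, here instance 00 = 3200) from the on-premises address range.
# Resource group, NSG name, priority, and address prefix are placeholders.
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName "SAP-RG" -Name "sap-app-nsg"

$nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-SAP-Dispatcher" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix "10.10.0.0/16" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange "3200" |
    Set-AzNetworkSecurityGroup
```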
How to:
> > ![Linux logo.][Logo_Linux] Linux >
-> Here are some examples of documentation about configuring network printers in Linux or including
-> a chapter regarding printing in Linux. It will work the same way in an Azure Linux VM as long as
-> the VM is part of a VPN:
->
-> * SLES <https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share>
-> * RHEL or Oracle Linux <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/system_administrators_guide/index#sec-Starting_Print_Settings_Config>
->
+> Here are some examples of documentation about configuring network printers in Linux or that include a chapter about printing in Linux. It works the same way in an Azure Linux VM as long as the VM is part of a VPN:
>
+> * [SLES - Printing via SMB (Samba) Share or Windows Share](https://en.opensuse.org/SDB:Printing_via_SMB_(Samba)_Share_or_Windows_Share)
+> * [RHEL or Oracle Linux - Starting the Print Settings Configuration Tool](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/system_administrators_guide/index#sec-Starting_Print_Settings_Config)
+ ##### USB Printer (printer forwarding) In Azure the ability of the Remote Desktop Services to provide users the access to their local printer devices in a remote session is not available.
In Azure the ability of the Remote Desktop Services to provide users the access
> ![Windows logo.][Logo_Windows] Windows >
-> More details on printing with Windows can be found here: <https://technet.microsoft.com/library/jj590748.aspx>.
->
->
+> For more information, see [Printer sharing technical details](https://technet.microsoft.com/library/jj590748.aspx).
#### Integration of SAP Azure Systems into Correction and Transport System (TMS) in Cross-Premises
The setup of an SAP Portal in an Azure Virtual Machine does not differ from an o
A special deployment scenario by some customers is the direct exposure of the SAP Enterprise Portal to the Internet while the virtual machine host is connected to the company network via site-to-site VPN tunnel or ExpressRoute. For such a scenario, you have to make sure that specific ports are open and not blocked by firewall or network security group.
-The initial portal URI is http(s):`<Portalserver`>:5XX00/irj where the port is formed as documented by SAP in
-<https://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frameset.htm>.
+The initial portal URI is http(s)://`<Portalserver>`:5XX00/irj, where the port is formed as documented by SAP in [AS Java Ports](https://help.sap.com/saphelp_nw70ehp1/helpdata/de/a2/f9d7fed2adc340ab462ae159d19509/frameset.htm).
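As a small illustration of how the 5XX00 port is formed from the SAP instance number (instance number 00 is an assumption here):

```bash
# The AS Java HTTP port is 5<instance number>00, so instance 00 listens on 50000.
INSTANCE_NUMBER=00
echo "Portal port: 5${INSTANCE_NUMBER}00"
```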
![Endpoint configuration][planning-guide-figure-2800]
We can separate the discussion about SAP high availability in Azure into two par
and how it can be combined with Azure infrastructure HA.
-SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises physical or virtual environment. The following paper from SAP describes standard SAP High Availability configurations in virtualized environments on Windows: <https://scn.sap.com/docs/DOC-44415>. There is no sapinst-integrated SAP-HA configuration for Linux like it exists for Windows. Regarding SAP HA on-premises for Linux find more information here: <https://scn.sap.com/docs/DOC-8541>.
+SAP High Availability in Azure has some differences compared to SAP High Availability in an on-premises physical or virtual environment. The following paper from SAP describes [standard SAP High Availability configurations in virtualized environments on Windows](https://scn.sap.com/docs/DOC-44415). There is no sapinst-integrated SAP-HA configuration for Linux like the one that exists for Windows. For more information about SAP HA on-premises for Linux, see [SAP High Availability Partner Information](https://scn.sap.com/docs/DOC-8541).
### Azure Infrastructure High Availability
-There is currently a single-VM SLA of 99.9%. To get an idea how the availability of a single VM might look like, you can build the product of the different available Azure SLAs: <https://azure.microsoft.com/support/legal/sla/>.
+There is currently a single-VM SLA of 99.9%. To get an idea what the availability of a single VM might look like, you can calculate the product of the different available [Azure SLAs](https://azure.microsoft.com/support/legal/sla/).
The basis for the calculation is 30 days per month, or 43200 minutes. Therefore, 0.05% downtime corresponds to 21.6 minutes. As usual, the availability of the different services will multiply in the following way:
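As a rough sketch of that multiplication (the SLA percentages below are placeholders for illustration, not current Azure SLA figures):

```bash
# Hypothetical SLA values for illustration only - look up the current SLAs before using them.
awk 'BEGIN {
  availability = 0.999 * 0.9995 * 0.9999          # single VM * one dependent service * another
  downtime_minutes = (1 - availability) * 43200   # 30 days = 43200 minutes
  printf "Combined availability: %.4f%%\n", availability * 100
  printf "Expected downtime per 30-day month: %.1f minutes\n", downtime_minutes
}'
```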
Dependent on the SAP configuration chosen (2-Tier or 3-Tier) there could be a ne
The offline backup would basically require a shutdown of the VM through the Azure portal and a copy of the base VM disk plus all attached disks to the VM. This would preserve a point in time image of the VM and its associated disk. It is recommended to copy the backups into a different Azure Storage Account. Hence the procedure described in chapter [Copying disks between Azure Storage Accounts][planning-guide-5.4.2] of this document would apply. - A restore of that state would consist of deleting the base VM as well as the original disks of the base VM and mounted disks, copying back the saved disks to the original Storage Account or resource group for managed disks and then redeploying the system.
-This article shows an example how to script this process in PowerShell:
-<https://www.westerndevs.com/_/azure-snapshots/>
+For more information, see [how to script this process in PowerShell](https://www.westerndevs.com/_/azure-snapshots/).
Make sure to install a new SAP license since restoring a VM backup as described above creates a new hardware key.
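A minimal sketch of this offline backup idea for a VM with managed disks, using the Azure CLI; the resource group, VM, and snapshot names are assumptions:

```bash
# Hypothetical names - stop the VM first so the snapshot is taken from a consistent, offline state.
az vm deallocate --resource-group my-sap-rg --name my-sap-vm

# Snapshot the OS disk into a different resource group; repeat for every attached data disk.
OS_DISK_ID=$(az vm show --resource-group my-sap-rg --name my-sap-vm \
  --query "storageProfile.osDisk.managedDisk.id" --output tsv)
az snapshot create \
  --resource-group my-sap-backup-rg \
  --name my-sap-vm-osdisk-snapshot \
  --source "$OS_DISK_ID"

az vm start --resource-group my-sap-rg --name my-sap-vm
```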
Other VMs within the SAP system can be backed up using Azure Virtual Machine Bac
> > ![Windows logo.][Logo_Windows] Windows >
-> Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system supports the Windows VSS
-> (Volume Shadow Copy Service <https://msdn.microsoft.com/library/windows/desktop/bb968832(v=vs.85).aspx>) as, for example, SQL Server does.
-> However, be aware that based on Azure VM backups point-in-time restores of databases are not possible. Therefore, the
-> recommendation is to perform backups of databases with DBMS functionality instead of relying on Azure VM Backup.
+> Theoretically, VMs that run databases can be backed up in a consistent manner as well if the DBMS system supports the [Windows Volume Shadow Copy Service](/windows/win32/vss/volume-shadow-copy-service-portal) (VSS) as, for example, SQL Server does. However, be aware that point-in-time restores of databases are not possible based on Azure VM backups. Therefore, the recommendation is to perform backups of databases with DBMS functionality instead of relying on Azure VM Backup.
> > To get familiar with Azure Virtual Machine Backup start here: > [Back up an Azure VM from the VM settings](/../../../azure/backup/backup-azure-vms).
Other VMs within the SAP system can be backed up using Azure Virtual Machine Bac
> > ![Linux logo.][Logo_Linux] Linux >
-> There is no equivalent to Windows VSS in Linux. Therefore only file-consistent backups are possible but not
-> application-consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file system
-> which includes the SAP-related data can be saved, for example, using tar as described here:
-> <https://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm>
->
->
+> There is no equivalent to Windows VSS in Linux. Therefore, only file-consistent backups are possible, not application-consistent backups. The SAP DBMS backup should be done using DBMS functionality. The file system that includes the SAP-related data can be saved, for example, using tar as described in [Backing Up and Restoring your SAP System on UNIX](https://help.sap.com/saphelp_nw70ehp2/helpdata/en/d3/c0da3ccbb04d35b186041ba6ac301f/content.htm).
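For illustration only, a file-consistent tar backup of typical SAP directories could be as simple as the following; the directories and target path are assumptions, not an SAP recommendation for your system.

```bash
# Hypothetical example: archive common SAP file systems to an attached backup disk.
# Stop or quiesce the SAP instance first if you need a consistent state of these files.
tar -czvf /backup/sapfiles-$(date +%Y%m%d).tar.gz /sapmnt /usr/sap
```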
### Azure as DR site for production SAP landscapes
virtual-machines Sap Hana Availability One Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-availability-one-region.md
A health check functionality monitors the health of every VM that's hosted on an
With the host and VM monitoring provided by Azure, Azure VMs that experience host issues are automatically restarted on a healthy Azure host. >[!IMPORTANT]
->Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. The default settings of the commonly used Linux releases, are not automatically restarting VMs or server where the Linux kernel is in panic state. Instead the default foresees to keep the OS in kernel panic state to be able to attach a kernel debugger to analyze. Azure is honoring that behavior by not automatically restarting a VM with the guest OS in a such a state. Assumption is that such occurrences are extremely rare. You could overwrite the default behavior to enable a restart of the VM. To change the default behavior enable the parameter 'kernel.panic' in /etc/sysctl.conf. The time you set for this parameter is in seconds. Common recommended values are to wait for 20-30 seconds before triggering the reboot through this parameter. See also <https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf>.
+>Azure service healing will not restart Linux VMs where the guest OS is in a kernel panic state. By default, the commonly used Linux releases do not automatically restart VMs or servers where the Linux kernel is in a panic state. Instead, the default is to keep the OS in the kernel panic state so that a kernel debugger can be attached for analysis. Azure honors that behavior by not automatically restarting a VM with the guest OS in such a state. The assumption is that such occurrences are extremely rare. You can override the default behavior to enable a restart of the VM by setting the 'kernel.panic' parameter in /etc/sysctl.conf. The value you set for this parameter is in seconds; a common recommendation is to wait 20-30 seconds before triggering the reboot through this parameter. For more information, see [sysctl.conf](https://gitlab.com/procps-ng/procps/blob/master/sysctl.conf).
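A minimal sketch of that change, using the 20-second value from the commonly recommended range mentioned above:

```bash
# Reboot automatically 20 seconds after a kernel panic instead of staying halted.
echo "kernel.panic = 20" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p          # reload the settings without waiting for the next reboot
```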
The second feature that you rely on in this scenario is the fact that the HANA service that runs in a restarted VM starts automatically after the VM reboots. You can set up [HANA service auto-restart](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.01/en-US/cf10efba8bea4e81b1dc1907ecc652d3.html) through the watchdog services of the various HANA services.
virtual-machines Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-rhel.md
To deploy the template, follow these steps:
1. Create a load balancer (internal). We recommend [standard load balancer](../../../load-balancer/load-balancer-overview.md). * Select the virtual network created in step 2. 1. Create virtual machine 1.
- Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the Red Hat Enterprise Linux 7.4 for SAP HANA image <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM>
+ Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the [Red Hat Enterprise Linux 7.4 for SAP HANA image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM).
Select the availability set created in step 3. 1. Create virtual machine 2.
- Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the Red Hat Enterprise Linux 7.4 for SAP HANA image <https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM>
+ Use at least Red Hat Enterprise Linux 7.4 for SAP HANA. This example uses the [Red Hat Enterprise Linux 7.4 for SAP HANA image](https://portal.azure.com/#create/RedHat.RedHatEnterpriseLinux75forSAP-ARM).
Select the availability set created in step 3. 1. Add data disks.
The steps in this section use the following prefixes:
1. **[A]** RHEL for HANA configuration
- Configure RHEL as described in <https://access.redhat.com/solutions/2447641> and in the following SAP notes:
+ Configure RHEL as described in the following notes:
+ - [2447641 - Additional packages required for installing SAP HANA SPS 12 on RHEL 7.X](https://access.redhat.com/solutions/2447641)
- [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690) - [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782) - [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
The steps in this section use the following prefixes:
1. **[A]** Install the SAP HANA
- To install SAP HANA System Replication, follow <https://access.redhat.com/articles/3004101>.
+ To install SAP HANA System Replication, see [Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On](https://access.redhat.com/articles/3004101).
* Run the **hdblcm** program from the HANA DVD. Enter the following values at the prompt: * Choose installation: Enter **1**.
virtual-machines Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md
[anf-azure-doc]:/azure/azure-netapp-files/ [anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all -
-[2205917]:https://launchpad.support.sap.com/#/notes/2205917
-[1944799]:https://launchpad.support.sap.com/#/notes/1944799
-[1928533]:https://launchpad.support.sap.com/#/notes/1928533
-[2015553]:https://launchpad.support.sap.com/#/notes/2015553
-[2178632]:https://launchpad.support.sap.com/#/notes/2178632
-[2191498]:https://launchpad.support.sap.com/#/notes/2191498
-[2243692]:https://launchpad.support.sap.com/#/notes/2243692
-[1984787]:https://launchpad.support.sap.com/#/notes/1984787
-[1999351]:https://launchpad.support.sap.com/#/notes/1999351
-[1410736]:https://launchpad.support.sap.com/#/notes/1410736
-[1900823]:https://launchpad.support.sap.com/#/notes/1900823
-[2292690]:https://launchpad.support.sap.com/#/notes/2292690
-[2455582]:https://launchpad.support.sap.com/#/notes/2455582
-[2593824]:https://launchpad.support.sap.com/#/notes/2593824
-[2009879]:https://launchpad.support.sap.com/#/notes/2009879
-
-[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html
+[2205917]: https://launchpad.support.sap.com/#/notes/2205917
+[1944799]: https://launchpad.support.sap.com/#/notes/1944799
+[1928533]: https://launchpad.support.sap.com/#/notes/1928533
+[2015553]: https://launchpad.support.sap.com/#/notes/2015553
+[2178632]: https://launchpad.support.sap.com/#/notes/2178632
+[2191498]: https://launchpad.support.sap.com/#/notes/2191498
+[2243692]: https://launchpad.support.sap.com/#/notes/2243692
+[1984787]: https://launchpad.support.sap.com/#/notes/1984787
+[1999351]: https://launchpad.support.sap.com/#/notes/1999351
+[1410736]: https://launchpad.support.sap.com/#/notes/1410736
+[1900823]: https://launchpad.support.sap.com/#/notes/1900823
+[2292690]: https://launchpad.support.sap.com/#/notes/2292690
+[2455582]: https://launchpad.support.sap.com/#/notes/2455582
+[2593824]: https://launchpad.support.sap.com/#/notes/2593824
+[2009879]: https://launchpad.support.sap.com/#/notes/2009879
+[sap-swcenter]: https://support.sap.com/en/my-support/software-downloads.html
+
+[2447641]: https://access.redhat.com/solutions/2447641
[sap-hana-ha]:sap-hana-high-availability.md [nfs-ha]:high-availability-guide-suse-nfs.md
Configure and prepare your OS by doing the following steps:
6. **[A]** Red Hat for HANA configuration.
- Configure RHEL as described in SAP Note [2292690], [2455582], [2593824] and <https://access.redhat.com/solutions/2447641>.
+ Configure RHEL as described in SAP Notes [2292690], [2455582], and [2593824], and in Red Hat note [2447641].
> [!NOTE] > If installing HANA 2.0 SP04 you will be required to install package `compat-sap-c++-7` as described in SAP note [2593824], before you can install SAP HANA.
virtual-machines Vm Extension For Sap New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/vm-extension-for-sap-new.md
This check makes sure that all performance metrics that appear inside your SAP a
### Run the readiness check on a Windows VM 1. Sign in to the Azure virtual machine (using an admin account is not necessary).
-1. Open a web browser and navigate to http://127.0.0.1:11812/azure4sap/metrics
+1. Open a web browser and navigate to `http://127.0.0.1:11812/azure4sap/metrics`.
1. The browser should display or download an XML file that contains the monitoring data of your virtual machine. If that is not the case, make sure that the Azure Extension for SAP is installed.
-1. Check the content of the XML file. The XML file that you can access at http://127.0.0.1:11812/azure4sap/metrics contains all populated Azure performance counters for SAP. It also contains a summary and health indicator of the status of Azure Extension for SAP.
+1. Check the content of the XML file. The XML file that you can access at `http://127.0.0.1:11812/azure4sap/metrics` contains all populated Azure performance counters for SAP. It also contains a summary and health indicator of the status of Azure Extension for SAP.
1. Check the value of the **Provider Health Description** element. If the value is not **OK**, follow the instructions in chapter [Health checks][health-check]. ### Run the readiness check on a Linux VM
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
Virtual network peering enables you to seamlessly connect two or more [Virtual N
Azure supports the following types of peering:
-* **Virtual network peering**: Connect virtual networks within the same Azure region.
+* **Virtual network peering**: Connecting virtual networks within the same Azure region.
* **Global virtual network peering**: Connecting virtual networks across Azure regions. The benefits of using virtual network peering, whether local or global, include:
web-application-firewall Application Gateway Waf Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-configuration.md
Exclusions can be configured to apply to a specific set of WAF rules, to ruleset
### Per-rule exclusions
-You can configure an exclusion for a specific rule, group of rules, or rule set. You must specify the rule or rules that the exclusion applies to. You also need to specify the request attribute that should be excluded from the WAF evaluation.
+You can configure an exclusion for a specific rule, group of rules, or rule set. You must specify the rule or rules that the exclusion applies to. You also need to specify the request attribute that should be excluded from the WAF evaluation. To exclude a complete group of rules, provide only the `ruleGroupName` parameter; the `rules` parameter is only useful when you want to limit the exclusion to specific rules of a group.
Per-rule exclusions are available when you use the OWASP (CRS) ruleset version 3.2 or later.
resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPo
ruleGroups: [ { ruleGroupName: 'REQUEST-942-APPLICATION-ATTACK-SQLI'
+ rules: [
+ {
+ ruleId: '942150'
+ }
+ {
+ ruleId: '942410'
+ }
+ ]
} ] }
resource wafPolicy 'Microsoft.Network/ApplicationGatewayWebApplicationFirewallPo
"ruleSetVersion": "3.2", "ruleGroups": [ {
- "ruleGroupName": "REQUEST-942-APPLICATION-ATTACK-SQLI"
+ "ruleGroupName": "REQUEST-942-APPLICATION-ATTACK-SQLI",
+ "rules": [
+ {
+ "ruleId": "942150"
+ },
+ {
+ "ruleId": "942410"
+ }
+ ]
} ] }
web-application-firewall Tutorial Restrict Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
description: Learn how to restrict web traffic with a Web Application Firewall o
Previously updated : 03/29/2021 Last updated : 06/23/2022
az network public-ip create \
--sku Standard ```
-## Create an application gateway with a WAF
+## Create an application gateway with a WAF policy
You can use [az network application-gateway create](/cli/azure/network/application-gateway) to create the application gateway named *myAppGateway*. When you create an application gateway using the Azure CLI, you specify configuration information, such as capacity, sku, and HTTP settings. The application gateway is assigned to *myAGSubnet* and *myAGPublicIPAddress*. ```azurecli-interactive
+az network application-gateway waf-policy create \
+ --name waf-pol \
+ --resource-group myResourceGroupAG \
+ --type OWASP \
+ --version 3.2
+ az network application-gateway create \ --name myAppGateway \ --location eastus \
az network application-gateway create \
--frontend-port 80 \ --http-settings-port 80 \ --http-settings-protocol Http \
- --public-ip-address myAGPublicIPAddress
-
-az network application-gateway waf-config set \
- --enabled true \
- --gateway-name myAppGateway \
- --resource-group myResourceGroupAG \
- --firewall-mode Detection \
- --rule-set-version 3.0
+ --public-ip-address myAGPublicIPAddress \
+ --waf-policy waf-pol \
+ --priority 1
``` It may take several minutes for the application gateway to be created. After the application gateway is created, you can see these new features of it: