Updates from: 07/23/2021 03:05:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/threat-management.md
When testing the smart lockout feature, use a distinctive pattern for each passw
When the smart lockout threshold is reached, you'll see the following message while the account is locked: **Your account is temporarily locked to prevent unauthorized use. Try again later**. The error messages can be [localized](localization-string-ids.md#sign-up-or-sign-in-error-messages).
+> [!NOTE]
+> When you test smart lockout, your sign-in requests might be handled by different datacenters due to the geo-distributed and load-balanced nature of the Azure AD authentication service. In that scenario, because each Azure AD datacenter tracks lockout independently, it might take more attempts than your defined lockout threshold to cause a lockout. A user can make at most (threshold_limit * datacenter_count) bad attempts before being completely locked out. For example, with a lockout threshold of 10 and sign-ins spread across three datacenters, up to 30 bad attempts could be needed before the user is fully locked out.
+ ## Viewing locked-out accounts

To obtain information about locked-out accounts, you can check the Active Directory [sign-in activity report](../active-directory/reports-monitoring/concept-sign-ins.md). Under **Status**, select **Failure**. Failed sign-in attempts with a **Sign-in error code** of `50053` indicate a locked account:
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-licensing.md
Previously updated : 07/13/2021 Last updated : 07/22/2021
The following table details the different ways to get Azure AD Multi-Factor Auth
## Feature comparison of versions
-The following table provides a list of the features that are available in the various versions of Azure AD Multi-Factor Authentication. Plan out your needs for securing user authentication, then determine which approach meets those requirements. For example, although Azure AD Free provides security defaults that provide Azure AD Multi-Factor Authentication, only the mobile authenticator app can be used for the authentication prompt, not a phone call or SMS. This approach may be a limitation if you can't ensure the mobile authentication app is installed on a user's personal device.
+The following table provides a list of the features that are available in the various versions of Azure AD Multi-Factor Authentication. Plan out your needs for securing user authentication, then determine which approach meets those requirements. For example, although Azure AD Free provides security defaults that provide Azure AD Multi-Factor Authentication, only the mobile authenticator app can be used for the authentication prompt, not a phone call or SMS. This approach may be a limitation if you can't ensure the mobile authentication app is installed on a user's personal device. See [Azure AD Free tier](#azure-ad-free-tier) later in this topic for more details.
-| Feature | Azure AD Free - Security defaults | Azure AD Free - Azure AD Global Administrators | Office 365 | Azure AD Premium P1 or P2 |
+| Feature | Azure AD Free - Security defaults (enabled for all users) | Azure AD Free - Global Administrators only | Office 365 | Azure AD Premium P1 or P2 |
| |::|::|::|::|
| Protect Azure AD tenant admin accounts with MFA | ● | ● (*Azure AD Global Administrator* accounts only) | ● | ● |
| Mobile app as a second factor | ● | ● | ● | ● |
active-directory Concept Sspr Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-sspr-licensing.md
To reduce help desk calls and loss of productivity when a user can't sign in to
This article details the different ways that self-service password reset can be licensed and used. For specific details about pricing and billing, see the [Azure AD pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
+Although some unlicensed users may technically be able to access SSPR, a license is required for any user that you intend to benefit from the service.
+
+> [!NOTE]
+> Some tenant services are not currently capable of limiting benefits to specific users. Efforts should be taken to limit the service benefits to licensed users. This will help avoid potential service disruption to your organization once targeting capabilities are available.
+ ## Compare editions and features

The following table outlines the different SSPR scenarios for password change, reset, or on-premises writeback, and which SKUs provide the feature.
The following table outlines the different SSPR scenarios for password change, r
| **Hybrid user password change or reset with on-prem writeback**<br />When a user in Azure AD that's synchronized from an on-premises directory using Azure AD Connect wants to change or reset their password and also write the new password back to on-prem. | | | ● | ● |

> [!WARNING]
-> Standalone Microsoft 365 Basic and Standard licensing plans don't support SSPR with on-premises writeback. The on-premises writeback feature requires Azure AD Premium P1, Premium P2, or Microsoft 365 Business Premium.
+> Standalone Microsoft 365 Basic and Standard licensing plans don't support SSPR with on-premises writeback. The on-premises writeback feature requires Azure AD Premium P1, Premium P2, or Microsoft 365 Business Premium.
For additional licensing information, including costs, see the following pages:
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-getstarted.md
Previously updated : 07/19/2021 Last updated : 07/22/2021
Risk policies include:
- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
- [Require MFA for users with medium or high sign-in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#enable-policies)
+### Convert users from per-user MFA to Conditional Access based MFA
+
+If your users were enabled using per-user enabled and enforced Azure AD Multi-Factor Authentication, the following PowerShell can assist you in converting them to Conditional Access-based Azure AD Multi-Factor Authentication.
+
+Run this PowerShell in an ISE window or save as a `.PS1` file to run locally.
+
+```PowerShell
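+# NOTE: This script uses the MSOnline PowerShell module. Install it with
+# Install-Module MSOnline and connect with Connect-MsolService before running.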
+# Sets the MFA requirement state
+function Set-MfaState {
+ [CmdletBinding()]
+ param(
+ [Parameter(ValueFromPipelineByPropertyName=$True)]
+ $ObjectId,
+ [Parameter(ValueFromPipelineByPropertyName=$True)]
+ $UserPrincipalName,
+ [ValidateSet("Disabled","Enabled","Enforced")]
+ $State
+ )
+ Process {
+ Write-Verbose ("Setting MFA state for user '{0}' to '{1}'." -f $ObjectId, $State)
+ $Requirements = @()
+ if ($State -ne "Disabled") {
+ $Requirement =
+ [Microsoft.Online.Administration.StrongAuthenticationRequirement]::new()
+ $Requirement.RelyingParty = "*"
+ $Requirement.State = $State
+ $Requirements += $Requirement
+ }
+ Set-MsolUser -ObjectId $ObjectId -UserPrincipalName $UserPrincipalName `
+ -StrongAuthenticationRequirements $Requirements
+ }
+}
+# Disable MFA for all users
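+# (Make sure the Conditional Access policies that require MFA are already in place
+# before disabling per-user MFA, so users keep MFA coverage during the conversion.)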
+Get-MsolUser -All | Set-MfaState -State Disabled
+```
+ ## Plan user session lifetime

When planning your MFA deployment, it's important to think about how frequently you would like to prompt your users. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt.
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-userstates.md
Previously updated : 07/19/2021 Last updated : 07/22/2021
To change the per-user Azure AD Multi-Factor Authentication state for a user, co
After you enable users, notify them via email. Tell the users that a prompt is displayed to ask them to register the next time they sign in. Also, if your organization uses non-browser apps that don't support modern authentication, they need to create app passwords. For more information, see the [Azure AD Multi-Factor Authentication end-user guide](../user-help/multi-factor-authentication-end-user-first-time.md) to help them get started.
+### Convert users from per-user MFA to Conditional Access based MFA
+
+If your users were enabled using per-user enabled and enforced Azure AD Multi-Factor Authentication, the following PowerShell can assist you in converting them to Conditional Access-based Azure AD Multi-Factor Authentication.
+
+Run this PowerShell in an ISE window or save as a `.PS1` file to run locally.
+
+```PowerShell
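+# NOTE: This script uses the MSOnline PowerShell module. Install it with
+# Install-Module MSOnline and connect with Connect-MsolService before running.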
+# Sets the MFA requirement state
+function Set-MfaState {
+ [CmdletBinding()]
+ param(
+ [Parameter(ValueFromPipelineByPropertyName=$True)]
+ $ObjectId,
+ [Parameter(ValueFromPipelineByPropertyName=$True)]
+ $UserPrincipalName,
+ [ValidateSet("Disabled","Enabled","Enforced")]
+ $State
+ )
+ Process {
+ Write-Verbose ("Setting MFA state for user '{0}' to '{1}'." -f $ObjectId, $State)
+ $Requirements = @()
+ if ($State -ne "Disabled") {
+ $Requirement =
+ [Microsoft.Online.Administration.StrongAuthenticationRequirement]::new()
+ $Requirement.RelyingParty = "*"
+ $Requirement.State = $State
+ $Requirements += $Requirement
+ }
+ Set-MsolUser -ObjectId $ObjectId -UserPrincipalName $UserPrincipalName `
+ -StrongAuthenticationRequirements $Requirements
+ }
+}
+# Disable MFA for all users
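+# (Make sure the Conditional Access policies that require MFA are already in place
+# before disabling per-user MFA, so users keep MFA coverage during the conversion.)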
+Get-MsolUser -All | Set-MfaState -State Disabled
+```
+ ## Next steps

To configure Azure AD Multi-Factor Authentication settings, see [Configure Azure AD Multi-Factor Authentication settings](howto-mfa-mfasettings.md).
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
To install the Azure AD Password Protection proxy service, complete the followin
```powershell
Import-Module AzureADPasswordProtection
```
+
+ > [!WARNING]
+ > The 64-bit version of PowerShell must be used. Certain cmdlets may not work with PowerShell (x86).
1. To check that the Azure AD Password Protection proxy service is running, use the following PowerShell command:
The `Get-AzureADPasswordProtectionDCAgent` cmdlet may be used to query the softw
## Next steps
-Now that you've installed the services that you need for Azure AD Password Protection on your on-premises servers, [enable on-prem Azure AD Password Protection in the Azure portal](howto-password-ban-bad-on-premises-operations.md) to complete your deployment.
+Now that you've installed the services that you need for Azure AD Password Protection on your on-premises servers, [enable on-prem Azure AD Password Protection in the Azure portal](howto-password-ban-bad-on-premises-operations.md) to complete your deployment.
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-net-token-cache-serialization.md
Here are examples of possible distributed caches:
// or use a distributed Token Cache by adding
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(Configuration)
- .EnableTokenAcquisitionToCallDownstreamApi(new string[] { scopesToRequest })
+ .EnableTokenAcquisitionToCallDownstreamApi(new string[] { scopesToRequest })
+ .AddDistributedTokenCaches();
// and then choose your implementation
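For illustration, a minimal sketch of registering one such implementation in `Startup.ConfigureServices` might look like the following. It assumes the standard `Microsoft.Extensions.Caching` packages are referenced, and the `"Redis"` connection string name is only an example:

```csharp
// Option 1: in-memory distributed cache (single instance; useful for development and testing)
services.AddDistributedMemoryCache();

// Option 2: Redis-backed distributed cache shared across instances
// (requires the Microsoft.Extensions.Caching.StackExchangeRedis package)
services.AddStackExchangeRedisCache(options =>
{
    // "Redis" is an assumed connection string name in your configuration
    options.Configuration = Configuration.GetConnectionString("Redis");
    options.InstanceName = "MsalTokenCache";
});
```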
active-directory Entitlement Management Access Reviews Review Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-reviews-review-access.md
Title: Review access of an access package in Azure AD entitlement management
description: Learn how to complete an access review of entitlement management access packages in Azure Active Directory access reviews (Preview). documentationCenter: ''-+ editor:
na
ms.devlang: na Previously updated : 06/18/2020- Last updated : 07/22/2021+
Azure AD entitlement management simplifies how enterprises manage access to grou
## Prerequisites
-To review users' active access package assignments, you must meet the prerequisites to do an access review:
+To review users' active access package assignments, the creator of a review must satisfy these prerequisites:
- Azure AD Premium P2 - Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager For more information, see [License requirements](entitlement-management-overview.md#license-requirements).
+>[!NOTE]
+>The reviewer can be anyone the creator of a review selects (group owner, manager of user, the user themselves, or any selected user or group).
+ ## Open the access review
Once you open the access review, you will see the names of users for which you n
If there are multiple reviewers, the last submitted response is recorded. Consider an example where an administrator designates two reviewers ΓÇô Alice and Bob. Alice opens the review first and approves access. Before the review ends, Bob opens the review and denies access. In this case, the last deny access decision gets recorded. >[!NOTE]
->If a user is denied access, they aren't removed from the access package immediately. The user will be removed from the access package when the review ends, or an administrator ends the review.
+>If a user is denied access, they aren't removed from the access package immediately. The user will be removed from the access package when the review ends, or an administrator ends the review. However, when a user is approved access, the approval is instantaneous and granted even if the review period is still open.
### Approve or deny access using the system-generated recommendations
active-directory Linkedin Learning Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/linkedin-learning-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure LinkedIn Learning for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to LinkedIn Learning.
+
+documentationcenter: ''
+
+writer: Zhchia
++
+ms.assetid: 21e2f470-4eb1-472c-adb9-4203c00300be
+++
+ na
+ms.devlang: na
+ Last updated : 06/30/2020+++
+# Tutorial: Configure LinkedIn Learning for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both LinkedIn Learning and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [LinkedIn Learning](https://learning.linkedin.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in LinkedIn Learning
+> * Remove users in LinkedIn Learning when they do not require access anymore
+> * Keep user attributes synchronized between Azure AD and LinkedIn Learning
+> * Provision groups and group memberships in LinkedIn Learning
+> * [Single sign-on](linkedinlearning-tutorial.md) to LinkedIn Learning (recommended)
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* Approval and SCIM enabled for LinkedIn Learning (contact by email).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+3. Determine what data to [map between Azure AD and LinkedIn Learning](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure LinkedIn Learning to support provisioning with Azure AD
+1. Sign in to [LinkedIn Learning Settings](https://www.linkedin.com/learning-admin/settings/global). Select **SCIM Setup**, then select **Add new SCIM configuration**.
+
+ ![SCIM Setup configuration](./media/linkedin-learning-provisioning-tutorial/learning-scim-settings.png)
+
+2. Enter a name for the configuration, and set **Auto-assign licenses** to On. Then click **Generate token**.
+
+ ![SCIM configuration name](./media/linkedin-learning-provisioning-tutorial/learning-scim-configuration.png)
+
+3. After the configuration is created, an **Access token** should be generated. Copy and save it for later use.
+
+ ![SCIM access token](./media/linkedin-learning-provisioning-tutorial/learning-scim-token.png)
+
+4. You may reissue any existing configurations (which will generate a new token) or remove them.
+
+## Step 3. Add LinkedIn Learning from the Azure AD application gallery
+
+Add LinkedIn Learning from the Azure AD application gallery to start managing provisioning to LinkedIn Learning. If you have previously set up LinkedIn Learning for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to LinkedIn Learning, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to LinkedIn Learning
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in LinkedIn Learning based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for LinkedIn Learning in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+2. In the applications list, select **LinkedIn Learning**.
+
+ ![The LinkedIn Learning link in the Applications list](common/all-applications.png)
+
+3. Select the **Provisioning** tab.
+
+ ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
+
+4. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
+
+5. Under the **Admin Credentials** section, input `https://api.linkedin.com/scim` in **Tenant URL**. Input the access token value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to LinkedIn Learning. If the connection fails, ensure your LinkedIn Learning account has Admin permissions and try again.
+
+ ![Screenshot shows the Admin Credentials dialog box, where you can enter your Tenant U R L and Secret Token.](./media/linkedin-learning-provisioning-tutorial/provisioning.png)
+
+6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+7. Select **Save**.
+
+8. Under the **Mappings** section, select **Provision Azure Active Directory Users**.
+
+9. Review the user attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in LinkedIn Learning for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the LinkedIn Learning API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |externalId|String|&check;|
+ |userName|String|
+ |name.givenName|String|
+ |name.familyName|String|
+ |displayName|String|
+ |addresses[type eq "work"].locality|String|
+ |title|String|
+ |emails[type eq "work"].value|String|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
+
+10. Under the **Mappings** section, select **Provision Azure Active Directory Groups**.
+
+11. Review the group attributes that are synchronized from Azure AD to LinkedIn Learning in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in LinkedIn Learning for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|
+ ||||
+ |displayName|String|&check;|
+ |members|Reference|
+ |externalId|String|
+
+12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+13. To enable the Azure AD provisioning service for LinkedIn Learning, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+14. Define the users and/or groups that you would like to provision to LinkedIn Learning by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+15. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Additional resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Workplacebyfacebook Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/workplacebyfacebook-provisioning-tutorial.md
Previously updated : 04/28/2020 Last updated : 07/22/2021
Once you've configured provisioning, use the following resources to monitor your
## Troubleshooting tips

* If you see a user unsuccessfully created and there is an audit log event with the code `1789003`, it means that the user is from an unverified domain.
+* There are cases where users get an error 'ERROR: Missing Email field: You must provide an email Error returned from Facebook: Processing of the HTTP request resulted in an exception. Please see the HTTP response returned by the 'Response' property of this exception for details. This operation was retried 0 times. It will be retried again after this date'. This error is due to customers mapping mail, rather than userPrincipalName, to Facebook email, yet some users don't have a mail attribute.
+To avoid this error and successfully provision the failed users to Workplace from Facebook, modify the attribute mapping for the Workplace from Facebook email attribute to `Coalesce([mail],[userPrincipalName])`, unassign the user from Workplace from Facebook, or provision an email address for the user.
+ ## Change log

* 09/10/2020 - Added support for enterprise attributes "division", "organization", "costCenter" and "employeeNumber". Added support for custom attributes "startDate", "auth_method" and "frontline"
+* 07/22/2021 - Updated the troubleshooting tips for customers with a mapping of mail to Facebook mail yet some users don't have a mail attribute
## Additional resources
Once you've configured provisioning, use the following resources to monitor your
## Next steps
-* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
aks Kubernetes Walkthrough Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/kubernetes-walkthrough-portal.md
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.c
![Create AKS cluster - provide basic information](media/kubernetes-walkthrough-portal/create-cluster-basics.png)
+ > [!NOTE]
+ > You can change the preset configuration when creating your cluster by selecting *View all preset configurations* and choosing a different option.
+ > ![Create AKS cluster - portal preset options](media/kubernetes-walkthrough-portal/cluster-preset-options.png)
+ 4. Select **Next: Node pools** when complete.

5. Keep the default **Node pools** options. At the bottom of the screen, click **Next: Authentication**.
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/use-azure-ad-pod-identity.md
az extension add --name aks-preview
az extension update --name aks-preview ```
-## Create an AKS cluster with Azure CNI
+## Create an AKS cluster with Azure Container Networking Interface (CNI)
> [!NOTE]
> This is the default recommended configuration
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/monitor-instances-health-check.md
Title: Monitor the health of App Service instances description: Learn how to monitor the health of App Service instances using Health check.
-keywords: azure app service, web app, health check, route traffic, healthy instances, path, monitoring,
+keywords: azure app service, web app, health check, route traffic, healthy instances, path, monitoring, remove faulty instances, unhealthy instances, remove workers
Previously updated : 12/03/2020 Last updated : 07/19/2021 -+ # Monitor App Service instances using Health check
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/tutorial-python-postgresql-app.md
In this tutorial, you use the Azure CLI to complete the following tasks:
> * View diagnostic logs > * Manage the web app in the Azure portal
-You can also use the [Azure portal version of this tutorial](/azure/developer/python/tutorial-python-postgresql-app-portal&pivots=postgres-single-server).
+You can also use the [Azure portal version of this tutorial](/azure/developer/python/tutorial-python-postgresql-app-portal?pivots=postgres-single-server).
:::zone-end
In this tutorial, you use the Azure CLI to complete the following tasks:
> * View diagnostic logs > * Manage the web app in the Azure portal
-You can also use the [Azure portal version of this tutorial](/azure/developer/python/tutorial-python-postgresql-app-portal&pivots=postgres-flexible-server).
+You can also use the [Azure portal version of this tutorial](/azure/developer/python/tutorial-python-postgresql-app-portal?pivots=postgres-flexible-server).
:::zone-end
Learn how to map a custom DNS name to your app:
Learn how App Service runs a Python app: > [!div class="nextstepaction"]
-> [Configure Python app](configure-language-python.md)
+> [Configure Python app](configure-language-python.md)
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-diagnostics.md
You can also connect to your storage account and retrieve the JSON log entries f
#### Analyzing Access logs through GoAccess
-We have published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes and more. For more details, please see the [Readme file in the Resource Manager template folder in GitHub](https://aka.ms/appgwgoaccessreadme).
+We have published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes and more. For more details, please see the [Readme file in the Resource Manager template folder in GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/application-gateway-logviewer-goaccess).
## Next steps
application-gateway Application Gateway Key Vault Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/application-gateway-key-vault-common-errors.md
+
+ Title: 'Common Key Vault errors in Application Gateway'
+
+description: This article helps you identify Key Vault-related issues and resolve them for smooth operation of Application Gateway.
++++ Last updated : 07/12/2021++++
+# Common Key Vault errors in Application Gateway
+
+This troubleshooting guide helps you understand the details of the Key Vault error codes, their causes, and the associated Key Vault resource that's causing the problem. It also includes step-by-step guidance for resolving such misconfigurations.
+
+> [!NOTE]
+> The logs for Key Vault Diagnostics in Application Gateway are generated at every four-hour interval. Therefore, in some cases, you may have to wait for the logs to be refreshed, if the diagnostic continues to show the error after you have fixed the configuration.
+
+> [!TIP]
+> We recommend using a version-less Secret Identifier. This way, Application Gateway will automatically rotate the certificate if a newer version is available in the Key Vault. An example of a Secret URI without a version is `https://myvault.vault.azure.net/secrets/mysecret/`.
+
+### List of error codes and their details
+
+[comment]: # (Error Code 1)
+#### **1) Error Code:** UserAssignedIdentityDoesNotHaveGetPermissionOnKeyVault
+
+**Description:** The associated User-Assigned Managed Identity doesn't have GET permission.
+
+**Resolution:** Configure Key Vault's Access Policy to grant the associated User-Assigned Managed Identity GET permissions on Secrets.
+1. Navigate to the linked Key Vault in Azure portal
+2. Open the Access Policies blade
+3. Select "Vault Access policy" for Permission model
+4. Select "Get" permission for Secret for the given User-Assigned Managed Identity
+5. Save the configuration
+++
+For a complete guide to Key Vault's Access policy, see this [article](../key-vault/general/assign-access-policy-portal.md).
+</br></br>
+++
+[comment]: # (Error Code 2)
+#### **2) Error Code:** SecretDisabled
+
+**Description:** The associated Certificate has been disabled in Key Vault.
+
+**Resolution:** Re-enable the certificate version that is currently in use for Application Gateway.
+1. Navigate to the linked Key Vault in Azure portal
+2. Open the Certificates blade
+3. Click on the required certificate name, and then the disabled version
+4. Use the toggle on the management page to enable that certificate version
+
+</br></br>
++
+[comment]: # (Error Code 3)
+#### **3) Error Code:** SecretDeletedFromKeyVault
+
+**Description:** The associated Certificate has been deleted from Key Vault.
+
+**Resolution:** The deleted certificate object within a Key Vault can be restored using its Soft-Delete recovery feature. To recover a deleted certificate:
+1. Navigate to the linked Key Vault in Azure portal
+2. Open the Certificates blade
+3. Use "Managed Deleted Certificates" tab to recover a deleted certificate.
+
+On the other hand, if a certificate object is permanently deleted, you will need to create a new certificate and update the Application Gateway with the new certificate details. When configuring through Azure CLI or Azure PowerShell, it is recommended to use a version-less secret identifier URI to allow instances to retrieve a renewed version of the certificate, if it exists.
+
+</br></br>
++
+[comment]: # (Error Code 4)
+#### **4) Error Code:** UserAssignedManagedIdentityNotFound
+
+**Description:** The associated User-Assigned Managed Identity has been deleted.
+
+**Resolution:** Follow the guidance below to resolve this issue.
+1. Re-create a Managed Identity with the same name that was used earlier and under the same Resource Group. You can refer to resource Activity Logs for details.
+2. Once created, assign that new Managed Identity the Reader role, at a minimum, under Application Gateway - Access Control (IAM).
+3. Finally, navigate to the desired Key Vault resource and set its Access Policies to grant GET Secret Permissions for this new Managed Identity.
+
+[More information](./key-vault-certs.md#how-integration-works)
+</br></br>
+
+[comment]: # (Error Code 5)
+#### **5) Error Code:** KeyVaultHasRestrictedAccess
+
+**Description:** Restricted Network setting for Key Vault.
+
+**Resolution:** You will encounter this issue upon enabling Key Vault Firewall for restricted access. You can still configure your Application Gateway in a restricted network of Key Vault in the following manner.
+1. Go to the Key Vault's Networking blade.
+2. Choose "Private endpoint and selected networks" in the "Firewall and Virtual Networks" tab.
+3. Then, using Virtual Networks, add your Application Gateway's virtual network and subnet. During the process, also configure the 'Microsoft.KeyVault' service endpoint by selecting its checkbox.
+4. Finally, select "Yes" to allow Trusted Services to bypass Key Vault's firewall.
+
+</br></br>
++
+[comment]: # (Error Code 6)
+#### **6) Error Code:** KeyVaultSoftDeleted
+
+**Description:** The associated Key Vault is in soft-delete state.
+
+**Resolution:** Recovering a soft-deleted Key Vault is straightforward. In the Azure portal, go to the Key Vaults service page.
+
+</br></br>
+Click on the Managed Deleted Vaults tab. From here, you can find the deleted Key Vault resource and recover it.
+</br></br>
++
+[comment]: # (Error Code 7)
+#### **7) Error Code:** CustomerKeyVaultSubscriptionDisabled
+
+**Description:** The Subscription for Key Vault is disabled.
+
+**Resolution:** Your Azure subscription can be disabled for various reasons. Refer to the guide for [Reactivating a disabled Azure subscription](../cost-management-billing/manage/subscription-disabled.md) and take the necessary action.
+</br></br>
+++
azure-app-configuration Enable Dynamic Configuration Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-app.md
# Tutorial: Use dynamic configuration in a Java Spring app
-App Configuration has two libraries for Spring. `spring-cloud-azure-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`. `spring-cloud-azure-appconfiguration-config-web` requires Spring Web along with Spring Boot. Both libraries support manual triggering to check for refreshed configuration values. `spring-cloud-azure-appconfiguration-config-web` also adds support for automatic checking of configuration refresh.
+App Configuration has two libraries for Spring. `azure-spring-cloud-appconfiguration-config` requires Spring Boot and takes a dependency on `spring-cloud-context`. `azure-spring-cloud-appconfiguration-config-web` requires Spring Web along with Spring Boot. Both libraries support manual triggering to check for refreshed configuration values. `azure-spring-cloud-appconfiguration-config-web` also adds support for automatic checking of configuration refresh.
-Refresh allows you to refresh your configuration values without having to restart your application, though it will cause all beans in the `@RefreshScope` to be recreated. The client library caches a hash id of the currently loaded configurations to avoid too many calls to the configuration store. The refresh operation doesn't update the value until the cached value has expired, even when the value has changed in the configuration store. The default expiration time for each request is 30 seconds. It can be overridden if necessary.
+Refresh allows you to refresh your configuration values without having to restart your application, though it will cause all beans in the `@RefreshScope` to be recreated. The client library caches a hash ID of the currently loaded configurations to avoid too many calls to the configuration store. The refresh operation doesn't update the value until the cached value has expired, even when the value has changed in the configuration store. The default expiration time for each request is 30 seconds. It can be overridden if necessary.
-`spring-cloud-azure-appconfiguration-config-web`'s automated refresh is triggered based off activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `spring-cloud-azure-appconfiguration-config-web`'s automated refresh will not trigger a refresh even if the cache expiration time has expired.
+`azure-spring-cloud-appconfiguration-config-web`'s automated refresh is triggered based off activity, specifically Spring Web's `ServletRequestHandledEvent`. If a `ServletRequestHandledEvent` is not triggered, `azure-spring-cloud-appconfiguration-config-web`'s automated refresh will not trigger a refresh even if the cache expiration time has expired.
## Use manual refresh

App Configuration exposes `AppConfigurationRefresh`, which can be used to check whether the cache is expired and, if it is, trigger a refresh.

```java
-import com.microsoft.azure.spring.cloud.config.AppConfigurationRefresh;
+import com.azure.spring.cloud.config.AppConfigurationRefresh;
...
public void myConfigurationRefreshCheck() {
To use automated refresh, start with a Spring Boot app that uses App Configuration, such as the app you create by following the [Spring Boot quickstart for App Configuration](quickstart-java-spring-app.md).
-Then, open the *pom.xml* file in a text editor, and add a `<dependency>` for `spring-cloud-azure-appconfiguration-config-web`.
+Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `azure-spring-cloud-appconfiguration-config-web` using the following code.
-**Spring Cloud 1.1.x**
+**Spring Boot**
```xml <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>1.1.5</version>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
+ <version>2.0.0</version>
</dependency> ```
-**Spring Cloud 1.2.x**
+> [!NOTE]
+> If you need support for older dependencies, see our [previous library](https://github.com/Azure/azure-sdk-for-java).
-```xml
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>1.2.7</version>
-</dependency>
-```
+1. Update `bootstrap.properties` to enable refresh
+
+ ```properties
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled=true
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.triggers[0].key=sentinel
+ ```
+
+1. Open the **Azure Portal** and navigate to your App Configuration resource associated with your application. Select **Configuration Explorer** under **Operations** and create a new key-value pair by selecting **+ Create** > **Key-value** to add the following parameters:
+
+ | Key | Value |
+ |||
+ | sentinel | 1 |
-## Run and test the app locally
+ Leave **Label** and **Content Type** empty for now.
+
+1. Select **Apply**.
1. Build your Spring Boot application with Maven and run it.
Then, open the *pom.xml* file in a text editor, and add a `<dependency>` for `sp
1. Open a browser window, and go to the URL: `http://localhost:8080`. You will see the message associated with your key.
- You can also use *curl* to test your application, for example:
-
+ You can also use *curl* to test your application, for example:
+ ```cmd curl -X GET http://localhost:8080/ ```
Then, open the *pom.xml* file in a text editor, and add a `<dependency>` for `sp
| Key | Value | |||
- | application/config.message | Hello - Updated |
+ | /application/config.message | Hello - Updated |
+
+1. Update the sentinel key you created earlier to a new value. This change will trigger the application to refresh all configuration keys once the refresh interval has passed.
+
+ | Key | Value |
+ |||
+ | sentinel | 2 |
1. Refresh the browser page to see the new message displayed.
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
+
+ Title: "Tutorial: Use dynamic configuration using push refresh in a single instance Java Spring app"
+
+description: In this tutorial, you learn how to dynamically update the configuration data for a Java Spring app using push refresh
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ms.devlang: java
+ Last updated : 04/05/2021++
+#Customer intent: I want to use push refresh to dynamically update my app to use the latest configuration data in App Configuration.
+
+# Tutorial: Use dynamic configuration using push refresh in a Java Spring app
+
+The App Configuration Java Spring client library supports updating configuration on demand without causing an application to restart. An application can be configured to detect changes in App Configuration using one or both of the following two approaches.
+
+- Poll Model: This is the default behavior that uses polling to detect changes in configuration. Once the cached value of a setting expires, the next call to `AppConfigurationRefresh`'s `refreshConfigurations` sends a request to the server to check if the configuration has changed, and pulls the updated configuration if needed.
+
+- Push Model: This uses [App Configuration events](./concept-app-configuration-event.md) to detect changes in configuration. Once App Configuration is set up to send key-value change events through Event Grid to a [Web Hook](/azure/event-grid/handler-event-hubs), the application can use these events to optimize the total number of requests needed to keep the configuration updated.
+
+This tutorial shows how you can implement dynamic configuration updates in your code using push refresh. It builds on the app introduced in the quickstarts. Before you continue, finish [Create a Java Spring app with App Configuration](./quickstart-java-spring-app.md) first.
+
+You can use any code editor to do the steps in this tutorial. [Visual Studio Code](https://code.visualstudio.com/) is an excellent option that's available on the Windows, macOS, and Linux platforms.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Set up a subscription to send configuration change events from App Configuration to a Web Hook
+> * Deploy a Spring Boot application to App Service
+> * Set up your Java Spring app to update its configuration in response to changes in App Configuration.
+> * Consume the latest configuration in your application.
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- A supported [Java Development Kit (JDK)](/java/azure/jdk) with version 8.
+- [Apache Maven](https://maven.apache.org/download.cgi) version 3.0 or above.
+- An existing Azure App Configuration Store.
++
+## Set up push refresh
+
+1. Open *pom.xml* and update the file with the following dependencies.
+
+ ```xml
+ <dependency>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
+ <version>2.0.0</version>
+ </dependency>
+
+ <!-- Adds the Ability to Push Refresh -->
+ <dependency>
+ <groupId>org.springframework.boot</groupId>
+ <artifactId>spring-boot-starter-actuator</artifactId>
+ </dependency>
+ ```
+
+1. Set up [Maven App Service Deployment](/azure/app-service/quickstart-java?tabs=javase) so the application can be deployed to Azure App Service via Maven.
+
+ ```console
+ mvn com.microsoft.azure:azure-webapp-maven-plugin:1.12.0:config
+ ```
+
+1. Open *bootstrap.properties* and configure Azure App Configuration push refresh:
+
+ ```properties
+ # Azure App Configuration Properties
+ spring.cloud.azure.appconfiguration.stores[0].connection-string= ${AppConfigurationConnectionString}
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.enabled= true
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.cacheExpiration= 30d
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.triggers[0].key= sentinel
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.push-notification.primary-token.name= myToken
+ spring.cloud.azure.appconfiguration.stores[0].monitoring.push-notification.primary-token.secret= myTokenSecret
+
+ management.endpoints.web.exposure.include= "appconfiguration-refresh"
+ ```
+
+A random delay is added before the cached value is marked as dirty to reduce potential throttling. The default maximum delay before the cached value is marked as dirty is 30 seconds.
+
+> [!NOTE]
+> The Primary token name should be stored in App Configuration as a key, and then the Primary token secret should be stored as an App Configuration Key Vault Reference for added security.
+
+## Build and run the app locally
+
+Event Grid Web Hooks require validation on creation. You can validate by following this [guide](/azure/event-grid/webhook-event-delivery) or by starting your application with Azure App Configuration Spring Web Library already configured, which will register your application for you. To use an event subscription, follow the steps in the next two sections.
+
+1. Set the environment variable to your App Configuration instance's connection string:
+
+ #### [Windows command prompt](#tab/cmd)
+
+ ```cmd
+ setx AppConfigurationConnectionString <connection-string-of-your-app-configuration-store>
+ ```
+
+ #### [PowerShell](#tab/powershell)
+
+ ```PowerShell
+ $Env:AppConfigurationConnectionString = <connection-string-of-your-app-configuration-store>
+ ```
+
+ #### [Bash](#tab/bash)
+
+ ```bash
+ export AppConfigurationConnectionString='<connection-string-of-your-app-configuration-store>'
+ ```
+
+1. Run the following command to build the console app:
+
+ ```shell
+ mvn package
+ ```
+
+1. After the build successfully completes, run the following command to run the app locally:
+
+ ```shell
+ mvn spring-boot:run
+ ```
+
+## Set up an event subscription
+
+1. Open the App Configuration resource in the Azure portal, then click on `+ Event Subscription` in the `Events` pane.
+
+ :::image type="content" source="./media/events-pane.png" alt-text="The events pane has an option to create new Subscriptions." :::
+
+1. Enter a name for the `Event Subscription` and the `System Topic`. By default, the event types Key-Value modified and Key-Value deleted are selected. You can change this, and use the Filters tab to choose the exact reasons a push event will be sent.
+
+ :::image type="content" source="./media/create-event-subscription.png" alt-text="Events require a name, topic, and filters." :::
+
+1. Select `Web Hook` as the `Endpoint Type`, then select `Select an endpoint`.
+
+ :::image type="content" source="./media/event-subscription-webhook-endpoint.png" alt-text="Selecting Endpoint creates a new blade to enter the endpoint URI." :::
+
+1. The endpoint is the URI of the application + "/actuator/appconfiguration-refresh?{your-token-name}={your-token-secret}". For example, `https://my-azure-webapp.azurewebsites.net/actuator/appconfiguration-refresh?myToken=myTokenSecret`.
+
+1. Click on `Create` to create the event subscription. When `Create` is selected, a registration request for the Web Hook is sent to your application. The Azure App Configuration client library receives the request, verifies it, and returns a valid response.
+
+1. Click on `Event Subscriptions` in the `Events` pane to validate that the subscription was created successfully.
+
+ :::image type="content" source="./media/event-subscription-view-webhook.png" alt-text="Web Hook shows up in a table on the bottom of the page." :::
+
+> [!NOTE]
+> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](/azure/event-grid/event-filtering.md) or [Service Bus subscription filters](/azure/service-bus-messaging/topic-filters.md). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
+
+## Verify and test application
+
+1. After your application is running, use *curl* to test your application, for example:
+
+ ```cmd
+ curl -X GET http://localhost:8080
+ ```
+
+1. Open the **Azure Portal** and navigate to your App Configuration resource associated with your application. Select **Configuration Explorer** under **Operations** and update the values of the following keys:
+
+ | Key | Value |
+ |||
+ | application/config.message | Hello - Updated |
+
+1. Refresh the browser page to see the new message displayed.
+
+## Clean up resources
++
+## Next steps
+
+In this tutorial, you enabled your Java app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial.
+
+> [!div class="nextstepaction"]
+> [Managed identity integration](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
<dependency> <groupId>com.azure.spring</groupId> <artifactId>azure-spring-cloud-appconfiguration-config-web</artifactId>
- <version>2.0.0-beta.2</version>
+ <version>2.0.0</version>
</dependency> <dependency> <groupId>com.azure.spring</groupId>
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-java-spring-app.md
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
1. Open the *pom.xml* file in a text editor, and add the Spring Cloud Azure Config starter to the list of `<dependencies>`:
- **Spring Cloud 1.1.x**
+ **Spring Boot 2.4**
```xml <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
- <version>1.1.5</version>
+ <groupId>com.azure.spring</groupId>
+ <artifactId>azure-spring-cloud-appconfiguration-config</artifactId>
+ <version>2.0.0</version>
</dependency> ```
- **Spring Cloud 1.2.x**
-
- ```xml
- <dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>spring-cloud-azure-appconfiguration-config</artifactId>
- <version>1.2.7</version>
- </dependency>
- ```
+ > [!NOTE]
+ > If you need to support an older version of Spring Boot, see our [old library](https://github.com/Azure/azure-sdk-for-java).
1. Create a new Java file named *MessageProperties.java* in the package directory of your app. Add the following lines:
Use the [Spring Initializr](https://start.spring.io/) to create a new Spring Boo
In this quickstart, you created a new App Configuration store and used it with a Java Spring app. For more information, see [Spring on Azure](/java/azure/spring-framework/). To learn how to enable your Java Spring app to dynamically refresh configuration settings, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Enable dynamic configuration](./enable-dynamic-configuration-java-spring-app.md)
+> [Enable dynamic configuration](./enable-dynamic-configuration-java-spring-app.md)
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
To add a secret to the vault, you need to take just a few additional steps. In t
export AZURE_TENANT_ID='tenantId'
```

> [!NOTE]
> These Key Vault credentials are only used within your application. Your application authenticates directly with Key Vault using these credentials without involving the App Configuration service. The Key Vault provides authentication for both your application and your App Configuration service without sharing or exposing keys.

## Update your code to use a Key Vault reference
-1. Create an environment variable called **APP_CONFIGURATION_ENDPOINT**. Set its value to the endpoint of your App Configuration store. You can find the endpoint on the **Access Keys** blade in the Azure portal. Restart the command prompt to allow the change to take effect.
-
+1. Create an environment variable called **APP_CONFIGURATION_ENDPOINT**. Set its value to the endpoint of your App Configuration store. You can find the endpoint on the **Access Keys** blade in the Azure portal. Restart the command prompt to allow the change to take effect.
-1. Open *bootstrap.properties* in the *resources* folder. Update this file to use the **APP_CONFIGURATION_ENDPOINT** value. Remove any references to a connection string in this file.
+1. Open *bootstrap.properties* in the *resources* folder. Update this file to use the **APP_CONFIGURATION_ENDPOINT** value. Remove any references to a connection string in this file.
```properties spring.cloud.azure.appconfiguration.stores[0].endpoint= ${APP_CONFIGURATION_ENDPOINT}
To add a secret to the vault, you need to take just a few additional steps. In t
import com.azure.core.credential.TokenCredential; import com.azure.identity.EnvironmentCredentialBuilder;
- import com.microsoft.azure.spring.cloud.config.AppConfigurationCredentialProvider;
- import com.microsoft.azure.spring.cloud.config.KeyVaultCredentialProvider;
+ import com.azure.spring.cloud.config.AppConfigurationCredentialProvider;
+ import com.azure.spring.cloud.config.KeyVaultCredentialProvider;
public class AzureCredentials implements AppConfigurationCredentialProvider, KeyVaultCredentialProvider{
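   The body of the `AzureCredentials` class is elided in this excerpt. Based on the imports above, a minimal sketch might look like the following; the `getAppConfigCredential` and `getKeyVaultCredential` method names come from the two provider interfaces, so verify them against the library version you're using:

   ```java
   import com.azure.core.credential.TokenCredential;
   import com.azure.identity.EnvironmentCredentialBuilder;

   import com.azure.spring.cloud.config.AppConfigurationCredentialProvider;
   import com.azure.spring.cloud.config.KeyVaultCredentialProvider;

   public class AzureCredentials implements AppConfigurationCredentialProvider, KeyVaultCredentialProvider {

       // Credential used when the provider connects to Key Vault.
       @Override
       public TokenCredential getKeyVaultCredential(String uri) {
           return getCredential();
       }

       // Credential used when the provider connects to App Configuration.
       @Override
       public TokenCredential getAppConfigCredential(String uri) {
           return getCredential();
       }

       // Builds a credential from the AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, and
       // AZURE_TENANT_ID environment variables exported earlier in this article.
       private TokenCredential getCredential() {
           return new EnvironmentCredentialBuilder().build();
       }
   }
   ```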
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
A function can have zero or more input bindings that can pass data to a function
### Output bindings
-To write to an output binding, you must apply an output binding attribute to the function method, which defined how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `functiontesting2` by using an output binding:
+To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following code writes a string value to a message queue named `myqueue-output` by using an output binding:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" :::
azure-functions Durable Functions Sub Orchestrations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/durable/durable-functions-sub-orchestrations.md
def orchestrator_function(context: df.DurableOrchestrationContext):
device_id = context.get_input() # Step 1: Create an installation package in blob storage and return a SAS URL.
- sas_url = yield context.call_activity"CreateInstallationPackage", device_id)
+ sas_url = yield context.call_activity("CreateInstallationPackage", device_id)
# Step 2: Notify the device that the installation package is ready. yield context.call_activity("SendPackageUrlToDevice", { "id": device_id, "url": sas_url })
def orchestrator_function(context: df.DurableOrchestrationContext):
provisioning_tasks = [] id_ = 0 for device_id in device_IDs:
- child_id = context.instance_id + ":" + id_
+ child_id = f"{context.instance_id}:{id_}"
provision_task = context.call_sub_orchestrator("DeviceProvisioningOrchestration", device_id, child_id) provisioning_tasks.append(provision_task) id_ += 1
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/how-to-secure-spa-app.md
Title: How to secure a single page web application with non-interactive sign-in in Microsoft Azure Maps
+ Title: How to secure a single-page web application with non-interactive sign-in in Microsoft Azure Maps
-description: How to configure a single page web application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK.
+description: How to configure a single-page web application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK.
Last updated 06/21/2021
-# How to secure a single page web application with non-interactive sign-in
+# How to secure a single-page web application with non-interactive sign-in
-This article shows you how to secure a single page web application with Azure Active Directory (Azure AD), when the user is unable to sign in to Azure AD.
+This article describes how to secure a single-page web application with Azure Active Directory (Azure AD), when the user isn't able to sign in to Azure AD.
-To create this non-interactive authentication flow, we'll create an Azure Function secure web service that's responsible for acquiring access tokens from Azure AD. This web service will be exclusively available only to your single page web application.
+To create this non-interactive authentication flow, we'll create a secure Azure Functions web service that's responsible for acquiring access tokens from Azure AD. This web service will be available only to your single-page web application.
[!INCLUDE [authentication details](./includes/view-authentication-details.md)] > [!Tip]
-> Azure maps can support access tokens from user sign-on / interactive flows. Interactive flows enable a more restricted scope of access revocation and secret management.
+> Azure Maps can support access tokens from user sign-on or interactive flows. You can use interactive flows for a more restricted scope of access revocation and secret management.
-## Create Azure Function
+## Create an Azure function
To create a secured web service application that's responsible for authentication to Azure AD:
-1. Create a function in the Azure portal. For more information, see [Create Azure Function](../azure-functions/functions-get-started.md).
+1. Create a function in the Azure portal. For more information, see [Getting started with Azure Functions](../azure-functions/functions-get-started.md).
-2. Configure CORS policy on the Azure function to be accessible by the single page web application. The CORS policy secures browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
+2. Configure a CORS policy on the Azure function so that it's accessible by the single-page web application. The CORS policy restricts browser clients to the allowed origins of your web application. For more information, see [Add CORS functionality](../app-service/app-service-web-tutorial-rest-api.md#add-cors-functionality).
3. [Add a system-assigned identity](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity) on the Azure function to enable creation of a service principal to authenticate to Azure AD.
-4. Grant role-based access for the system-assigned identity to the Azure Maps account. See [Grant role-based access](#grant-role-based-access-for-users-to-azure-maps) for details.
+4. Grant role-based access for the system-assigned identity to the Azure Maps account. For details, see [Grant role-based access](#grant-role-based-access-for-users-to-azure-maps).
5. Write code for the Azure function to obtain Azure Maps access tokens by using the system-assigned identity with one of the supported mechanisms or the REST protocol. For more information, see [Obtain tokens for Azure resources](../app-service/overview-managed-identity.md?tabs=dotnet#add-a-system-assigned-identity). An illustrative code sketch follows the sample response below.
- A sample REST protocol example:
+ Here's an example REST protocol:
```http GET /MSI/token?resource=https://atlas.microsoft.com/&api-version=2019-08-01 HTTP/1.1 Host: localhost:4141 ```
- Sample response:
+ And here's an example response:
```http HTTP/1.1 200 OK
To create a secured web service application that's responsible for authenticatio
} ```
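   The article doesn't mandate a language for the function code. As an illustration only, here's a Java sketch of the same token request made from inside a deployed function app; the `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` environment variables and the `X-IDENTITY-HEADER` header are assumptions based on the App Service managed identity protocol for API version 2019-08-01, not values taken from this article:

   ```java
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;

   public class MapsTokenClient {

       // Requests an Azure Maps access token from the managed identity endpoint and
       // returns the raw JSON response (which contains an access_token field).
       public static String requestMapsTokenJson() throws Exception {
           String endpoint = System.getenv("IDENTITY_ENDPOINT"); // assumption: set by the Functions host
           String header = System.getenv("IDENTITY_HEADER");     // assumption: set by the Functions host

           String url = endpoint + "?resource=https://atlas.microsoft.com/&api-version=2019-08-01";

           HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                   .header("X-IDENTITY-HEADER", header)
                   .GET()
                   .build();

           HttpResponse<String> response = HttpClient.newHttpClient()
                   .send(request, HttpResponse.BodyHandlers.ofString());

           // Parse with your preferred JSON library before returning the token to the browser client.
           return response.body();
       }
   }
   ```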
-6. Configure security for the Azure function HttpTrigger
+6. Configure security for the Azure function HttpTrigger:
- * [Create a function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys)
- * [Secure HTTP endpoint](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production) for the Azure function in production.
+ 1. [Create a function access key](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#authorization-keys)
+ 1. [Secure HTTP endpoint](../azure-functions/functions-bindings-http-webhook-trigger.md?tabs=csharp#secure-an-http-endpoint-in-production) for the Azure function in production.
-7. Configure web application Azure Maps Web SDK.
+7. Configure the web application to use the Azure Maps Web SDK.
```javascript //URL to custom endpoint to fetch Access token
To create a secured web service application that's responsible for authenticatio
## Next steps
-Further understanding of Single Page Application Scenario:
+To learn more about the single-page application scenario:
> [!div class="nextstepaction"] > [Single-page application](../active-directory/develop/scenario-spa-overview.md)
Find the API usage metrics for your Azure Maps account:
Explore other samples that show how to integrate Azure AD with Azure Maps: > [!div class="nextstepaction"]
-> [Azure Maps Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ClientGrant)
+> [Azure Maps Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples/tree/master/src/ClientGrant)
azure-monitor Availability Multistep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/availability-multistep.md
Title: Monitor with multi-step web tests - Azure Application Insights description: Set up multi-step web tests to monitor your web applications with Azure Application Insights Previously updated : 02/13/2021 Last updated : 07/21/2021 # Multi-step web tests
Last updated 02/13/2021
You can monitor a recorded sequence of URLs and interactions with a website via multi-step web tests. This article will walk you through the process of creating a multi-step web test with Visual Studio Enterprise. > [!NOTE]
-> Multi-step web tests depend on Visual Studio webtest files. It was [announced](https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/) that Visual Studio 2019 will be the last version with webtest functionality. It is important to understand that while no new features will be added, webtest functionality in Visual Studio 2019 is still currently supported and will continue to be supported during the support lifecycle of the product. The Azure Monitor product team has addressed questions regarding the future of multi-step availability tests [here](https://github.com/MicrosoftDocs/azure-docs/issues/26050#issuecomment-468814101).
-> </br>
> Multi-step web tests **are not supported** in the [Azure Government](../../azure-government/index.yml) cloud. > [!NOTE] > Multi-step web tests are categorized as classic tests and can be found under **Add Classic Test** in the Availability pane.
+## Multi-step webtest alternative
+
+Multi-step web tests depend on Visual Studio webtest files. It was [announced](https://devblogs.microsoft.com/devops/cloud-based-load-testing-service-eol/) that Visual Studio 2019 will be the last version with webtest functionality. It's important to understand that while no new features will be added, webtest functionality in Visual Studio 2019 is still currently supported and will continue to be supported during the support lifecycle of the product.
+
+We recommend using [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) to submit [custom availability tests](./availability-azure-functions.md) instead of multi-step web tests. This is the long-term supported solution for multi-request or authentication test scenarios. With TrackAvailability() and custom availability tests, you can run tests on any compute you want and use C# to easily author new tests.
+ ## Pre-requisites * Visual Studio 2017 Enterprise or greater.
azure-monitor Autoscale Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/autoscale/autoscale-best-practices.md
Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/), [Cloud Services](https://azure.microsoft.com/services/cloud-services/), [App Service - Web Apps](https://azure.microsoft.com/services/app-service/web/), and [API Management services](../../api-management/api-management-key-concepts.md). ## Autoscale concepts- * A resource can have only *one* autoscale setting * An autoscale setting can have one or more profiles and each profile can have one or more autoscale rules. * An autoscale setting scales instances horizontally, which is *out* by increasing the instances and *in* by decreasing the number of instances.
Azure Monitor autoscale applies only to [Virtual Machine Scale Sets](https://azu
* Similarly, all successful scale actions are posted to the Activity Log. You can then configure an activity log alert so that you can be notified via email, SMS, or webhooks whenever there is a successful autoscale action. You can also configure email or webhook notifications to get notified for successful scale actions via the notifications tab on the autoscale setting. ## Autoscale best practices- Use the following best practices as you use autoscale. ### Ensure the maximum and minimum values are different and have an adequate margin between them- If you have a setting that has minimum=2, maximum=2 and the current instance count is 2, no scale action can occur. Keep an adequate margin between the maximum and minimum instance counts, which are inclusive. Autoscale always scales between these limits. ### Manual scaling is reset by autoscale min and max- If you manually update the instance count to a value above or below the maximum, the autoscale engine automatically scales back to the minimum (if below) or the maximum (if above). For example, you set the range between 3 and 6. If you have one running instance, the autoscale engine scales to three instances on its next run. Likewise, if you manually set the scale to eight instances, on the next run autoscale will scale it back to six instances on its next run. Manual scaling is temporary unless you reset the autoscale rules as well. ### Always use a scale-out and scale-in rule combination that performs an increase and decrease
-If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts of defined in the profile. This is not optimal, ideally you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings.
+If you use only one part of the combination, autoscale will only take action in a single direction (scale out, or in) until it reaches the maximum, or minimum instance counts, as defined in the profile. This is not optimal, ideally you want your resource to scale up at times of high usage to ensure availability. Similarly, at times of low usage you want your resource to scale down, so you can realize cost savings.
### Choose the appropriate statistic for your diagnostics metric For diagnostics metrics, you can choose among *Average*, *Minimum*, *Maximum* and *Total* as a metric to scale by. The most common statistic is *Average*.
In this case
> If the autoscale engine detects flapping could occur as a result of scaling to the target number of instances, it will also try to scale to a different number of instances between the current count and the target count. If flapping does not occur within this range, autoscale will continue the scale operation with the new target. ### Considerations for scaling threshold values for special metrics
- For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
+For special metrics such as Storage or Service Bus Queue length metric, the threshold is the average number of messages available per current number of instances. Carefully choose the threshold value for this metric.
Let's illustrate it with an example to ensure you understand the behavior better.
Similarly, when autoscale switches back to the default profile, it first checks
![autoscale settings](./media/autoscale-best-practices/insights-autoscale-best-practices-2.png) ### Considerations for scaling when multiple rules are configured in a profile- There are cases where you may have to set multiple rules in a profile. The following autoscale rules are used by the autoscale engine when multiple rules are set. On *scale-out*, autoscale runs if any rule is met.
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Title: Monitor virtual machines with Azure Monitor - Alerts
-description: Describes how to create alerts from virtual machines and their guest workloads using Azure Monitor.
-
+ Title: 'Monitor virtual machines with Azure Monitor: Alerts'
+description: Learn how to create alerts from virtual machines and their guest workloads by using Azure Monitor.
+
Last updated 06/21/2021
-# Monitoring virtual machines with Azure Monitor - Alerts
-This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It provides guidance on creating alert rules for your virtual machines and their guest operating systems. [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data collected by VM insights.
+# Monitor virtual machines with Azure Monitor: Alerts
+
+This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It provides guidance on creating alert rules for your virtual machines and their guest operating systems. [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data collected by VM insights.
> [!NOTE]
-> The alerts described in this article do not include alerts created by [Azure Monitor for VM guest health](vminsights-health-overview.md) which is a feature currently in public preview. As this feature nears general availability, guidance for alerting will be consolidated.
+> The alerts described in this article don't include alerts created by [Azure Monitor for VM guest health](vminsights-health-overview.md), which is a feature currently in public preview. As this feature nears general availability, guidance for alerting will be consolidated.
> [!IMPORTANT]
-> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before you create any alert rules.
-
+> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create any alert rules, refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-## Choosing the alert type
+## Choose the alert type
The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md).
-The type of alert rule that you create for a particular scenario will depend on where the data is located that you're alerting on. You may have cases though where data for a particular alerting scenario is available in both Metrics and Logs, and you need to determine which rule type to use. You may also have flexibility in how you collect certain data and let your decision of alert rule type drive your decision for data collection method.
+The type of alert rule that you create for a particular scenario depends on where the data is located that you're alerting on. You might have cases where data for a particular alerting scenario is available in both Metrics and Logs, and you'll need to determine which rule type to use. You might also have flexibility in how you collect certain data and let your decision of alert rule type drive your decision for data collection method.
-It's typically the best strategy to use metric alerts instead of log alerts when possible since they're more responsive and stateful. This of course requires that the data you're alerting on is available in Metrics. VM insights currently sends all of its data to Logs, so you must install Azure Monitor agent to use metric alerts with data from the guest operating system. Use Log query alerts with metric data when its either not available in Metrics or you require additional logic beyond the relatively simple logic for a metric alert rule.
+Typically, the best strategy is to use metric alerts instead of log alerts when possible because they're more responsive and stateful. To use metric alerts, the data you're alerting on must be available in Metrics. VM insights currently sends all of its data to Logs, so you must install the Azure Monitor agent to use metric alerts with data from the guest operating system. Use Log query alerts with metric data when it's unavailable in Metrics or if you require logic beyond the relatively simple logic for a metric alert rule.
### Metric alert rules
-[Metric alert rules](../alerts/alerts-metric.md) are useful for alerting when a particular metric exceeds a threshold. For example, when the CPU of a machine is running high. The target of a metric alert rule can be a specific machine, a resource group, or a subscription. This allows you to create a single rule that applies to a group of machines.
+[Metric alert rules](../alerts/alerts-metric.md) are useful for alerting when a particular metric exceeds a threshold. An example is when the CPU of a machine is running high. The target of a metric alert rule can be a specific machine, a resource group, or a subscription. In this instance, you can create a single rule that applies to a group of machines.
Metric rules for virtual machines can use the following data: -- Host metrics for Azure virtual machines which are collected automatically. -- Metrics that are collected by Azure Monitor agent from the quest operating system. -
+- Host metrics for Azure virtual machines, which are collected automatically.
+- Metrics that are collected by the Azure Monitor agent from the guest operating system.
> [!NOTE]
-> When VM insights supports the Azure Monitor Agent which is currently in public preview, then it will send performance data from the guest operating system to Metrics so that you can use metric alerts.
--
+> When VM insights supports the Azure Monitor agent, which is currently in public preview, it sends performance data from the guest operating system to Metrics so that you can use metric alerts.
### Log alerts
-[Log alerts](../alerts/alerts-metric.md) can perform two different measurements of the result of a log query, each of which support distinct scenarios for monitoring virtual machines.
+[Log alerts](../alerts/alerts-metric.md) can perform two different measurements of the result of a log query, each of which supports distinct scenarios for monitoring virtual machines:
+
+- [Metric measurements](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value): Creates a separate alert for each record in the query results that has a numeric value that exceeds a threshold defined in the alert rule. Metric measurements are ideal for numeric data, such as the performance counters collected by VM insights, and for analyzing performance trends across multiple computers.
+- [Number of results](../alerts/alerts-unified-log.md#count-of-the-results-table-rows): Creates a single alert when a query returns at least a specified number of records. Number of results measurements are ideal for non-numeric data, such as Windows and Syslog events collected by the [Log Analytics agent](../agents/log-analytics-agent.md). You might also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple machines have the same error condition.
-- [Metric measurement](../alerts/alerts-unified-log.md#calculation-of-measure-based-on-a-numeric-column-such-as-cpu-counter-value) create a separate alert for each record in the query results that has a numeric value that exceeds a threshold defined in the alert rule. These are ideal for non-numeric data such and Windows and Syslog events collected by the Log Analytics agent or for analyzing performance trends across multiple computers.-- [Number of results](../alerts/alerts-unified-log.md#count-of-the-results-table-rows) create a single alert when a query returns at least a specified number of records. These are ideal for non-numeric data such and Windows and Syslog events collected by the [Log Analytics agent](../agents/log-analytics-agent.md) or for analyzing performance trends across multiple computers. You may also choose this strategy if you want to minimize your number of alerts or possibly create an alert only when multiple machines have the same error condition. ### Target resource and impacted resource > [!NOTE]
-> Resource-centric log alert rules, currently in public preview, will simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule which will better identify it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or description. When resource-center log query alerts become generally available, the guidance in this scenario will be updated.
+> Resource-centric log alert rules, currently in public preview, simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule, which better identifies it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or description. When resource-center log query alerts become generally available, the guidance in this scenario will be updated.
>
-Each alert in Azure Monitor has an **Affected resource** property which is defined by the target of the rule. For metric alert rules, the affected resource will be the computer which allows you to easily identify it in the standard alert view. Log query alerts will be associated with the workspace resource instead of the machine, even when you use a metric measurement alert that creates an alert for each computer. You need to view the details of the alert to view the computer that was affected.
-
-The computer name is stored in the **Impacted resource** property which you can view in the details of the alert. It's also displayed as a dimension in emails that are sent from the alert.
+Each alert in Azure Monitor has an **Affected resource** property, which is defined by the target of the rule. For metric alert rules, the affected resource is the computer, which allows you to easily identify it in the standard alert view. Log query alerts are associated with the workspace resource instead of the machine, even when you use a metric measurement alert that creates an alert for each computer. You need to view the details of the alert to view the computer that was affected.
+The computer name is stored in the **Impacted resource** property, which you can view in the details of the alert. It's also displayed as a dimension in emails that are sent from the alert.
-You may want to have a view that lists the alerts with the affected computer. You can do this with a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Following is a query that can be used to display alerts. Use the data source **Azure Resource Graph** in the workbook.
+You might want to have a view that lists the alerts with the affected computer. You can use a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook.
```kusto alertsmanagementresources
alertsmanagementresources
| project Alert, AlertStatus, Computer ``` ## Common alert rules
-The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log metric measurement alerts are provided for each. See [Choosing the alert type](#choosing-the-alert-type) section above for guidance on which type of alert to use.
+The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log metric measurement alerts are provided for each. For guidance on which type of alert to use, see [Choose the alert type](#choose-the-alert-type).
-If you're not familiar with the process for creating alert rules in Azure Monitor, see the following for guidance:
+If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the following articles for guidance:
- [Create, view, and manage metric alerts using Azure Monitor](../alerts/alerts-metric.md) - [Create, view, and manage log alerts using Azure Monitor](../alerts/alerts-log.md) - ### Machine unavailable
-The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be unresponsive, or the agent could be unresponsive. There are a variety of ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
+The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be unresponsive, or the agent could be unresponsive. There are various ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
#### Log query alert rules
-Log query alerts use the [Heartbeat table ](/azure/azure-monitor/reference/tables/heartbeat) which should have a heartbeat record every minute from each machine.
+Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
**Separate alerts**+ Use a metric measurement rule with the following query. ```kusto
Heartbeat
``` **Single alert**+ Use a number of results alert with the following query. ```kusto
Heartbeat
| where LastHeartbeat < ago(5m) ``` - #### Metric alert rules
-A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace will send a heartbeat metric value each minute. Since the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to *Count* and the **Threshold** value to match the **Evaluation granularity**.
+A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
-
-### CPU Alerts
+### CPU alerts
#### Metric alert rules | Target | Metric |
InsightsMetrics
#### Log alert rules
-**Available Memory in MB**
-
+**Available memory in MB**
```kusto InsightsMetrics
InsightsMetrics
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer ``` -
-**Available Memory in percentage**
+**Available memory in percentage**
```kusto InsightsMetrics
InsightsMetrics
| summarize AggregatedValue = avg(AvailableMemoryPercentage) by bin(TimeGenerated, 15m), Computer ``` - ### Disk alerts #### Metric alert rules
InsightsMetrics
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId ``` - **Logical disk used - individual disks** ```kusto
InsightsMetrics
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface | ``` ----- ## Comparison of log query alert measures
-To compare the behavior of the two log alert measures, here's a walkthrough of each to create an alert when the CPU of a virtual machine exceeds 80%. The data we need is in the [InsightsMetrics table](/azure/azure-monitor/reference/tables/insightsmetrics). Following is a simple query that returns the records that need to be evaluated for the alert. Each type of alert rule will use a variant of this query.
+To compare the behavior of the two log alert measures, here's a walk-through of each to create an alert when the CPU of a virtual machine exceeds 80 percent. The data you need is in the [InsightsMetrics table](/azure/azure-monitor/reference/tables/insightsmetrics). The following query returns the records that need to be evaluated for the alert. Each type of alert rule uses a variant of this query.
```kusto InsightsMetrics
InsightsMetrics
``` ### Metric measurement
-The **metric measurement** measure will create a separate alert for each record in a query that has a value that exceeds a threshold defined in the alert rule. These alert rules are ideal for virtual machine performance data since they create individual alerts for each computer. The log query for this measure needs to return a value for each machine. The threshold in the alert rule will determine if the value should fire an alert.
+The **metric measurement** measure creates a separate alert for each record in a query that has a value that exceeds a threshold defined in the alert rule. These alert rules are ideal for virtual machine performance data because they create individual alerts for each computer. The log query for this measure needs to return a value for each machine. The threshold in the alert rule determines if the value should fire an alert.
> [!NOTE]
-> Resource-centric log alert rules, currently in public preview, will simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule which will better identify it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or description. When resource-center log query alerts become generally available, the guidance in this scenario will be updated.
+> Resource-centric log alert rules, currently in public preview, simplify log query alerts for virtual machines and replace the functionality currently provided by metric measurement queries. You can use the machine as a target for the rule, which better identifies it as the affected resource. You can also apply a single alert rule to all machines in a particular resource group or description. When resource-center log query alerts become generally available, the guidance in this scenario will be updated.
#### Query
-The query for rules using metric measurement must include a record for each machine with a numeric property called *AggregatedValue*. This is the value that's compared to the threshold in the alert rule. The query doesn't need to compare this value to a threshold since the threshold is defined in the alert rule.
+The query for rules using metric measurement must include a record for each machine with a numeric property called **AggregatedValue**. This value is compared to the threshold in the alert rule. The query doesn't need to compare this value to a threshold because the threshold is defined in the alert rule.
```kusto InsightsMetrics
InsightsMetrics
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer ``` - #### Alert rule
-Select **Logs** from the Azure Monitor menu to Open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the top left and select the correct workspace. Paste in the query that has the logic you want and click **Run** to verify that it returns the correct results.
+On the Azure Monitor menu, select **Logs** to open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the upper left and select the correct workspace. Paste in the query that has the logic you want, and select **Run** to verify that it returns the correct results.
-Click **New alert rule** to create a rule with the current query. The rule will use your workspace for the **Resource**.
+Select **New alert rule** to create a rule with the current query. The rule uses your workspace for the **Resource**.
-Click the **Condition** to view the configuration. The query is already filled in with a graphical view of the value returned from the query for each computer. You can select the computer in the **Pivoted on** dropdown.
+Select **Condition** to view the configuration. The query is already filled in with a graphical view of the value returned from the query for each computer. Select the computer from the **Pivoted on** dropdown list.
-Scroll down to **Alert logic** and select **Metric measurement** for the **Based on** property. Since we want to alert when the utilization exceeds 80%, set the **Aggregate value** to *Greater than* and the **Threshold value** to *80*.
+Scroll down to **Alert logic**, and select **Metric measurement** for the **Based on** property. Because you want to alert when the utilization exceeds 80 percent, set **Aggregate value** to **Greater than** and **Threshold value** to **80**.
-Scroll down to **Alert logic** and select **Metric measurement** for the **Based on** property. Provide a **Threshold** value to compare to the value returned from the query. In this example, we'll use *80*. In **Trigger Alert Based On**, specify how many times the threshold must be exceeded before an alert is created. For example, you may not care if the processor exceeds a threshold once and then returns to normal, but you do care if it continues to exceed the threshold over multiple consecutive measurements. For this example, we'll set **Consecutive breaches** to *3*.
+In **Trigger Alert Based On**, specify how many times the threshold must be exceeded before an alert is created. For example, you might not care if the processor exceeds a threshold once and then returns to normal, but you do care if it continues to exceed the threshold over multiple consecutive measurements. For this example, set **Consecutive breaches** to **3**.
-Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query will only use data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value will make the alert rule more responsive but also have a higher cost. Specify **15** to run the query every 15 minutes.
+Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query only uses data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value makes the alert rule more responsive but also has a higher cost. Specify **15** to run the query every 15 minutes.
### Number of results rule
-The **number of results** rule will create a single alert when a query returns at least a specified number of records. The log query in this type of alert rule will typically identify the alerting condition, while the threshold for the alert rule determines if a sufficient number of records are returned.
-
+The **number of results** rule creates a single alert when a query returns at least a specified number of records. The log query in this type of alert rule typically identifies the alerting condition, while the threshold for the alert rule determines if a sufficient number of records are returned.
#### Query
-In this example, the threshold for the CPU utilization is included in the query. The number of records returned from the query will be the number of machines exceeding that threshold. The threshold for the alert rule is the minimum number of machines required to fire the alert. If you want an alert when a single machine is in error, then the threshold for the alert rule will be zero.
+In this example, the threshold for the CPU utilization is included in the query. The number of records returned from the query is the number of machines exceeding that threshold. The threshold for the alert rule is the minimum number of machines required to fire the alert. If you want an alert when a single machine is in error, the threshold for the alert rule is zero.
```kusto InsightsMetrics
InsightsMetrics
| where AverageUtilization > 80 ``` - #### Alert rule
-Select **Logs** from the Azure Monitor menu to Open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the top left and select the correct workspace. Paste in the query that has the logic you want and click **Run** to verify that it returns the correct results. You probably don't have a machine currently over threshold, so change to a lower threshold temporarily to verify results and then set the appropriate threshold before creating the alert rule.
---
-Click **New alert rule** to create a rule with the current query. The rule will use your workspace for the **Resource**.
-
-Click the **Condition** to view the configuration. The query is already filled in with a graphical view of the number of records that would have been returned from that query over the past several minutes.
-
-Scroll down to **Alert logic** and select **Number of results** for the **Based on** property. For this example, we want an alert if any records are returned, which means that at least one virtual machine has a processor above 80%. Select *Greater than* for the **Operator** and *0* for the **Threshold value**.
+On the Azure Monitor menu, select **Logs** to open Log Analytics. Make sure that the correct workspace is selected for your scope. If not, click **Select scope** in the upper left and select the correct workspace. Paste in the query that has the logic you want, and select **Run** to verify that it returns the correct results. You probably don't have a machine currently over threshold, so change to a lower threshold temporarily to verify results. Then set the appropriate threshold before you create the alert rule.
-Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query will only use data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value will make the alert rule more responsive but also have a higher cost. Specify **15** to run the query every 15 minutes.
+Select **New alert rule** to create a rule with the current query. The rule uses your workspace for the **Resource**.
+Select the **Condition** to view the configuration. The query is already filled in with a graphical view of the number of records that have been returned from that query over the past several minutes.
+Scroll down to **Alert logic**, and select **Number of results** for the **Based on** property. For this example, you want an alert if any records are returned, which means that at least one virtual machine has a processor above 80 percent. Select **Greater than** for the **Operator** and **0** for the **Threshold value**.
+Scroll down to **Evaluated based on**. **Period** specifies the time span for the query. Specify a value of **15** minutes, which means that the query only uses data collected in the last 15 minutes. **Frequency** specifies how often the query is run. A lower value makes the alert rule more responsive but also has a higher cost. Specify **15** to run the query every 15 minutes.
## Next steps * [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
-* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
+* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Title: Monitor virtual machines with Azure Monitor - Analyze monitoring data
-description: Describes the different features of Azure Monitor that allow you to analyze the health and performance of your virtual machines.
-
+ Title: 'Monitor virtual machines with Azure Monitor: Analyze monitoring data'
+description: Learn about the different features of Azure Monitor that you can use to analyze the health and performance of your virtual machines.
+
Last updated 06/21/2021
-# Monitoring virtual machines with Azure Monitor - Analyze monitoring data
-This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It describes how to analyze monitoring data for your virtual machines after you've completed their configuration.
+# Monitor virtual machines with Azure Monitor: Analyze monitoring data
+This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to analyze monitoring data for your virtual machines after you've completed their configuration.
-Once youΓÇÖve enabled VM insights on your virtual machines, data will be available for analysis. This article describes the different features of Azure Monitor that allow you to analyze the health and performance of your virtual machines. Several of these features provide a different experience depending on whether you're analyzing a single machine or multiple. Each experience is described here with any unique behavior of each feature depending on which experience is being used.
+After you've enabled VM insights on your virtual machines, data will be available for analysis. This article describes the different features of Azure Monitor that you can use to analyze the health and performance of your virtual machines. Several of these features provide a different experience depending on whether you're analyzing a single machine or multiple. Each experience is described here with any unique behavior of each feature depending on which experience is being used.
> [!NOTE] > This article includes guidance on analyzing data that's collected by Azure Monitor and VM insights. For data that you configure to monitor workloads running on virtual machines, see [Monitor workloads](monitor-virtual-machine-workloads.md). -- ## Single machine experience
-Access the single machine analysis experience from the **Monitoring** section of the menu in the Azure portal for each Azure virtual machine and Azure Arc enabled server. These options either limit the data that you're viewing to that machine or at least sets an initial filter for it. This allows you to focus on that particular machine, viewing its current performance and its trending over time, helping to identify any issues it maybe experiencing.
---- **Overview page.** Click the **Monitoring** tab to display [platform metrics](../essentials/data-platform-metrics.md) for the virtual machine host. This gives you a quick view of the trend over different time periods for important metrics such as CPU, network, and disk. Since these are host metrics though, counters from the guest operating system such as memory aren't included. Click on a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations and add additional counters for analysis.---- **Activity log.** [Activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this to view the recent activity of the machine such as any configuration changes and when it's been stopped and started. ---- **Insights.** Open [VM insights](../vm/vminsights-overview.md) with the map for the current virtual machine selected. This shows you running processes on the machine, dependencies on other machines and external processes. See [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm) for details on using the map view for a single machine. -
- Click on the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. See [How to chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm) for details on using the map view for a single machine.
--- **Alerts.** Views [alerts](../alerts/alerts-overview.md) for the current virtual machine. These are only alerts that use the machine as the target resource, so there may be other alerts associated with it. You may need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. See [Monitoring virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md) for details.--- **Metrics.** Open metrics explorer with the scope set to the machine. This is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added.--- **Diagnostic settings.** Enable and configure [diagnostics extension](../agents/diagnostics-extension-overview.md) for the current virtual machine. Note that this option is different than the **Diagnostic settings** option for other Azure resources. Only enable the diagnostic extension if you need to send data to Azure Event Hubs or Azure Storage.-
+Access the single machine analysis experience from the **Monitoring** section of the menu in the Azure portal for each Azure virtual machine and Azure Arc–enabled server. These options either limit the data that you're viewing to that machine or at least set an initial filter for it. In this way, you can focus on a particular machine, view its current performance and its trending over time, and help to identify any issues it might be experiencing.
-- **Advisor recommendations.** Recommendations for the current virtual machine from [Azure Advisor](../../advisor/index.yml). -- **Logs.** Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the current virtual machine. This allows you to select from a variety of existing queries to drill into log and performance data for only this machine.
+- **Overview page**: Select the **Monitoring** tab to display [platform metrics](../essentials/data-platform-metrics.md) for the virtual machine host. You get a quick view of the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations, and add more counters for analysis.
+- **Activity log**: See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started.
+- **Insights**: Open [VM insights](../vm/vminsights-overview.md) with the map for the current virtual machine selected. The map shows you running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).
+ Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).
-- **Connection monitor.** Open [Network Watcher Connection Monitor](../../network-watcher/connection-monitor-overview.md) to monitor connections between the current virtual machine and other virtual machines. ---- **Workbooks.** Open the workbook gallery with the VM insights workbooks for single machines. See [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks) for a list of the VM insights workbooks designed for individual machines.
+- **Alerts**: View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These alerts only use the machine as the target resource, so there might be other alerts associated with it. You might need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. For details, see [Monitor virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md).
+- **Metrics**: Open metrics explorer with the scope set to the machine. This option is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added.
+- **Diagnostic settings**: Enable and configure the [diagnostics extension](../agents/diagnostics-extension-overview.md) for the current virtual machine. This option is different than the **Diagnostic settings** option for other Azure resources. Only enable the diagnostic extension if you need to send data to Azure Event Hubs or Azure Storage.
+- **Advisor recommendations**: See recommendations for the current virtual machine from [Azure Advisor](../../advisor/index.yml).
+- **Logs**: Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the current virtual machine. You can select from a variety of existing queries to drill into log and performance data for only this machine.
+- **Connection monitor**: Open [Network Watcher Connection Monitor](../../network-watcher/connection-monitor-overview.md) to monitor connections between the current virtual machine and other virtual machines.
+- **Workbooks**: Open the workbook gallery with the VM insights workbooks for single machines. For a list of the VM insights workbooks designed for individual machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks).
## Multiple machine experience
-Access the multiple machine analysis experience from the **Monitor** menu in the Azure portal for each Azure virtual machine and Azure Arc enabled server. These options provide access to all data so that you can select the virtual machines that you're interested in comparing.
----- **Activity log.** [Activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of Virtual Machines or Virtual Machine Scale Sets to view events for all of your machines.--- **Alerts.** View [alerts](../alerts/alerts-overview.md) for all resources this includes alerts related to virtual machines but that are associated with the workspace. Create a filter for a **Resource Type** of Virtual Machines or Virtual Machine Scale Sets to view alerts for all of your machines. -
+Access the multiple machine analysis experience from the **Monitor** menu in the Azure portal for each Azure virtual machine and Azure Arc–enabled server. These options provide access to all data so that you can select the virtual machines that you're interested in comparing.
-- **Metrics.** Open [metrics explorer](../essentials/metrics-getting-started.md) with no scope selected. This is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together. -- **Logs** Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. This allows you to select from a variety of existing queries to drill into log and performance data for all machines. Or create a custom query to perform additional analysis.
+- **Activity log**: See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of virtual machines or virtual machine scale sets to view events for all your machines.
+- **Alerts**: View [alerts](../alerts/alerts-overview.md) for all resources, which includes alerts related to virtual machines but that are associated with the workspace. Create a filter for a **Resource Type** of virtual machines or virtual machine scale sets to view alerts for all your machines.
+- **Metrics**: Open [metrics explorer](../essentials/metrics-getting-started.md) with no scope selected. This feature is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together.
+- **Logs**: Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. You can select from a variety of existing queries to drill into log and performance data for all machines. Or you can create a custom query to perform additional analysis.
+- **Workbooks**: Open the workbook gallery with the VM insights workbooks for multiple machines. For a list of the VM insights workbooks designed for multiple machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks).
+- **Virtual Machines**: Open [VM insights](../vm/vminsights-overview.md) with the **Get Started** tab open. This action displays all machines in your Azure subscription and identifies which are being monitored. Use this view to onboard individual machines that aren't already being monitored.
+  Select the **Performance** tab to compare trends of critical performance counters for multiple machines over different periods of time. Select all machines in a subscription or resource group to include in the view. For details on how to use the Performance view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).
-- **Workbooks** Open the workbook gallery with the VM insights workbooks for multiple machines. See [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks) for a list of the VM insights workbooks designed for multiple machines. --- **Virtual Machines.** Open [VM insights](../vm/vminsights-overview.md) with the **Get Started** tab open. This displays all machines in your Azure subscription, identifying which are being monitored. Use this view to onboard individual machines that aren't already being monitored.-
- Click on the **Performance** tab to compare trends of critical performance counters for multiple machines over different periods of time. Select all machines in a subscription or resource group to include in the view. See [How to chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm) for details on using the map view for a single machine.
-
- Click on the Map tab to view running processes on machines, dependencies between machines and external processes. Select all machines in a subscription or resource group, or inspect the data for a single machine. See [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-azure-monitor) for details on using the map view for multiple machines.
+ Select the **Map** tab to view running processes on machines, dependencies between machines, and external processes. Select all machines in a subscription or resource group, or inspect the data for a single machine. For details on how to use the Map view for multiple machines, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-azure-monitor).
## Compare Metrics and Logs
-For many features of Azure Monitor, you don't need to understand the different types of data it uses and where it's stored. You can use VM insights, for example, without any understanding of what data is being used to populate the Performance view, Map view, and workbooks. You just focus on the logic that you're analyzing. As you dig deeper though, there are cases where you will need to understand the difference between [Metrics](../essentials/data-platform-metrics.md) and [Logs](../logs/data-platform-logs.md) since different features of Azure Monitor use different kinds of data, and the type of alerting that you use for a particular scenario will depend on having that data available in a particular location.
+For many features of Azure Monitor, you don't need to understand the different types of data it uses and where it's stored. You can use VM insights, for example, without any understanding of what data is being used to populate the Performance view, Map view, and workbooks. You just focus on the logic that you're analyzing. As you dig deeper, you'll need to understand the difference between [Metrics](../essentials/data-platform-metrics.md) and [Logs](../logs/data-platform-logs.md). Different features of Azure Monitor use different kinds of data. The type of alerting that you use for a particular scenario depends on having that data available in a particular location.
+This level of detail can be confusing if you're new to Azure Monitor. The following information helps you understand the differences between the types of data:
-This can be confusing if you're new to Azure Monitor, but the following details should help you understand the differences between the types of data.
--- Any non-numeric data such as events is stored in Logs. Metrics can only include numeric data that's sampled at regular intervals.-- Numeric data can be stored in both Metrics and Logs so it can be analyzed in different ways and support different types of alerts.-- Performance data from the guest operating system will be sent to Logs by VM insights using the Log Analytics agent.-- Performance data from the guest operating system will be sent to Metrics by Azure Monitor agent.
+- Any non-numeric data, such as events, is stored in Logs. Metrics can only include numeric data that's sampled at regular intervals.
+- Numeric data can be stored in both Metrics and Logs so that it can be analyzed in different ways and support different types of alerts.
+- Performance data from the guest operating system is sent to Logs by VM insights by using the Log Analytics agent.
+- Performance data from the guest operating system is sent to Metrics by the Azure Monitor agent.
> [!NOTE]
-> The Azure Monitor agent and send data to both Metrics and Logs. In this scenario, it's only used for Metrics since Log Analytics agent sends data to Logs and as currently required for VM insights. When VM insights uses the Azure Monitor agent, this scenario will be updated to remove the Log Analytics agent.
+> The Azure Monitor agent sends data to both Metrics and Logs. In this scenario, it's only used for Metrics because the Log Analytics agent sends data to Logs as currently required for VM insights. When VM insights uses the Azure Monitor agent, this scenario will be updated to remove the Log Analytics agent.
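For example, guest performance counters collected by VM insights land in the `InsightsMetrics` table in Logs and are retrieved with a log query. The following Azure PowerShell sketch illustrates the idea; it assumes the Az.OperationalInsights module is installed, and the workspace ID and computer name are placeholders.

```powershell
# Guest performance data collected by VM insights is stored in Logs (InsightsMetrics table).
# Placeholder values: replace the workspace ID and computer name with your own.
$workspaceId = "00000000-0000-0000-0000-000000000000"

$query = @'
InsightsMetrics
| where Computer == "myVM"
| summarize AvgValue = avg(Val) by Namespace, Name
| order by Namespace asc
'@

# Run the log query against the Log Analytics workspace and return the rows.
(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```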
## Analyze data with VM insights
-VM insights includes multiple performance charts that help you quickly get a status of the operation of your monitored machines, their trending performance over time, and dependencies between machines and processes. It also offers a consolidated view of different aspects of any monitored machine such as its properties and events collected in the Log Analytics workspace.
+VM insights includes multiple performance charts that help you quickly get a status of the operation of your monitored machines, their trending performance over time, and dependencies between machines and processes. It also offers a consolidated view of different aspects of any monitored machine, such as its properties and events collected in the Log Analytics workspace.
-The **Get Started** tab displays all machines in your Azure subscription,identifying which are being monitored. Use this view to quickly identify which machines aren't being monitored and to onboard individual machines that aren't already being monitored.
+The **Get Started** tab displays all machines in your Azure subscription and identifies which ones are being monitored. Use this view to quickly identify machines that aren't being monitored and to onboard them individually.
-The **Performance** view includes multiple charts with several key performance indicators (KPIs) to help you determine how well machines are performing. The charts show resource utilization over a period of time so you can identify bottlenecks, anomalies, or switch to a perspective listing each machine to view resource utilization based on the metric selected. See [How to chart performance with VM insights](vminsights-performance.md) for details on using the performance view.
+The **Performance** view includes multiple charts with several key performance indicators (KPIs) to help you determine how well machines are performing. The charts show resource utilization over a period of time. You can use them to identify bottlenecks, see anomalies, or switch to a perspective listing each machine to view resource utilization based on the metric selected. For details on how to use the Performance view, see [Chart performance with VM insights](vminsights-performance.md).
-Use the **Map** view to see running processes on machines and their dependencies on other machines and external processes. You can change the time window for the view to determine if these dependencies have changed from another time period. See [Use the Map feature of VM insights to understand application components](vminsights-maps.md) for details on using the map view.
+Use the **Map** view to see running processes on machines and their dependencies on other machines and external processes. You can change the time window for the view to determine if these dependencies have changed from another time period. For details on how to use the Map view, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md).
## Analyze metric data with metrics explorer
-Metrics explorer allows you plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. See [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md) for details on using this tool.
+By using metrics explorer, you can plot charts, visually correlate trends, and investigate spikes and dips in metric values. For details on how to use this tool, see [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md).
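The same platform metrics that metrics explorer charts can also be retrieved programmatically. The following Azure PowerShell sketch assumes the Az.Monitor module is installed; the resource ID is a placeholder.

```powershell
# Placeholder: the full Azure resource ID of the virtual machine.
$vmId = "/subscriptions/<subid>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachines/<vmName>"

# Retrieve the host-level CPU metric for the VM at a 5-minute grain.
Get-AzMetric -ResourceId $vmId -MetricName "Percentage CPU" -TimeGrain 00:05:00
```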
-There are three namespaces used by virtual machines:
+Virtual machines use three namespaces:
| Namespace | Description | Requirement |
|:---|:---|:---|
| Virtual Machine Host | Host metrics automatically collected for all Azure virtual machines. Detailed list of metrics at [Microsoft.Compute/virtualMachines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines). | Collected automatically with no configuration required. |
-| Guest (classic) | Limited set of guest operating system and application performance data. Available in metrics explorer but not other Azure Monitor features such as metric alerts. | [Diagnostic extension](../agents/diagnostics-extension-overview.md) installed. Data is read from Azure storage. |
+| Guest (classic) | Limited set of guest operating system and application performance data. Available in metrics explorer but not other Azure Monitor features, such as metric alerts. | [Diagnostic extension](../agents/diagnostics-extension-overview.md) installed. Data is read from Azure Storage. |
| Virtual Machine Guest | Guest operating system and application performance data available to all Azure Monitor features using metrics. | [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) installed with a [Data Collection Rule](../agents/data-collection-rule-overview.md). |

## Analyze log data with Log Analytics
-Log Analytics allows you to perform custom analysis of your log data. Use Log Analytics when you want to dig deeper into the data used to create the views in VM insights. You may want to analyze different logic and aggregations of that data, correlate security data collected by Azure Security Center and Azure Sentinel with your health and availability data, or work with data collected for your [workloads](monitor-virtual-machine-workloads.md).
-
+By using Log Analytics, you can perform custom analysis of your log data. Use Log Analytics when you want to dig deeper into the data used to create the views in VM insights. You might want to analyze different logic and aggregations of that data, correlate security data collected by Azure Security Center and Azure Sentinel with your health and availability data, or work with data collected for your [workloads](monitor-virtual-machine-workloads.md).
-You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Click **Queries** at the top of the Log Analytics screen and view queries with a **Resource type** of **Virtual machines** or **Virtual machine Scale Sets**. See [Using queries in Azure Monitor Log Analytics](../logs/queries.md) for information on using these queries and [Log Analytics tutorial](../logs/log-analytics-tutorial.md) for a complete tutorial on using Log Analytics to run queries and work with their results.
+You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Select **Queries** at the top of the Log Analytics screen, and view queries with a **Resource type** of **Virtual machines** or **Virtual machine scale sets**. For information on how to use these queries, see [Using queries in Azure Monitor Log Analytics](../logs/queries.md). For a tutorial on how to use Log Analytics to run queries and work with their results, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md).
-When you launch the Launch Log Analytics from VM insights using the properties pane in either the **Performance** or **Map** view, it lists the tables that have data for the selected computer. Click on a table to open Log Analytics with a simple query that returns all records in that table for the selected computer. Work with these results or modify the query for more complex analysis. The [scope](../log/../logs/scope.md) set to the workspace meaning that you have access data for all computers using that workspace.
+When you start Log Analytics from VM insights by using the properties pane in either the **Performance** or **Map** view, it lists the tables that have data for the selected computer. Select a table to open Log Analytics with a simple query that returns all records in that table for the selected computer. Work with these results or modify the query for more complex analysis. The [scope](../log/../logs/scope.md) is set to the workspace, which means that you have access to data for all computers that use that workspace.
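Because the scope is the workspace, a single query can span every monitored computer, not just the one you started from. The following sketch uses the same placeholder workspace ID and a simple query against the `Heartbeat` table.

```powershell
# Placeholder value: replace the workspace ID with your own.
$workspaceId = "00000000-0000-0000-0000-000000000000"

# List the most recent heartbeat from every computer that reports to the workspace.
$query = @'
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| order by LastHeartbeat desc
'@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results
```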
## Visualize data with workbooks
-[Workbooks](../visualize/workbooks-overview.MD) provide interactive reports in the Azure portal, combining different kinds of data into a single view. Workbooks combine text,ΓÇ»[log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Workbooks are editable by any other team members who have access to the same Azure resources.
+[Workbooks](../visualize/workbooks-overview.MD) provide interactive reports in the Azure portal and combine different kinds of data into a single view. Workbooks combine text, [log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Workbooks are editable by any other team members who have access to the same Azure resources.
Workbooks are helpful for scenarios such as:
-* Exploring the usage of your virtual machine when you don't know the metrics of interest in advance: CPU utilization, disk space, memory, network dependencies, etc. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, making them great for this kind of free-form exploration.
+* Exploring the usage of your virtual machine when you don't know the metrics of interest in advance, such as CPU utilization, disk space, memory, and network dependencies. Unlike other usage analytics tools, workbooks let you combine multiple kinds of visualizations and analyses, which makes them great for this kind of free-form exploration.
* Explaining to your team how a recently provisioned VM is performing, by showing metrics for key counters and other log events.
-* Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text, then show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above- or below-target.
+* Sharing the results of a resizing experiment of your VM with other members of your team. You can explain the goals for the experiment with text. Then you can show each usage metric and analytics queries used to evaluate the experiment, along with clear call-outs for whether each metric was above or below target.
* Reporting the impact of an outage on the usage of your VM, combining data, text explanation, and a discussion of next steps to prevent outages in the future.
-VM insights includes the following workbooks. You can use these workbooks or use them as a start to create custom workbooks to address your particular requirements.
+VM insights includes the following workbooks. You can use these workbooks as they are or use them as a starting point for custom workbooks that address your particular requirements.
### Single virtual machine

| Workbook | Description |
|-|-|
-| Performance | Provides a customizable version of the Performance view that leverages all of the Log Analytics performance counters that you have enabled. |
+| Performance | Provides a customizable version of the Performance view that uses all the Log Analytics performance counters that you have enabled. |
| Connections | Provides an in-depth view of the inbound and outbound connections from your VM. |

### Multiple virtual machines

| Workbook | Description |
|-|-|
-| Performance | Provides a customizable version of the Top N List and Charts view in a single workbook that leverages all of the Log Analytics performance counters that you have enabled.|
-| Performance counters | A Top N chart view across a wide set of performance counters. |
+| Performance | Provides a customizable version of the Top N List and Charts view in a single workbook that uses all the Log Analytics performance counters that you have enabled.|
+| Performance counters | Provides a Top N Chart view across a wide set of performance counters. |
| Connections | Provides an in-depth view of the inbound and outbound connections from your monitored machines. |
| Active Ports | Provides a list of the processes that have bound to the ports on the monitored machines and their activity in the chosen timeframe. |
| Open Ports | Provides the number of ports open on your monitored machines and the details on those open ports. |
-| Failed Connections | Display the count of failed connections on your monitored machines, the failure trend, and if the percentage of failures is increasing over time. |
-| Security and Audit | An analysis of your TCP/IP traffic that reports on overall connections, malicious connections, where the IP endpoints reside globally. To enable all features, you will need to enable Security Detection. |
-| TCP Traffic | A ranked report for your monitored machines and their sent, received, and total network traffic in a grid and displayed as a trend line. |
-| Traffic Comparison | Compare network traffic trends for a single machine or a group of machines. |
-| Log Analytics agent | Analyze the health of your agents including the number of agents connecting to a workspace, which are unhealthy, and the effect of the agent on the performance of the machine. This workbook isn't available from VM insights like the other workbooks. Go to **Workbooks** in the Azure Monitor menu and select **Public Templates**. |
+| Failed Connections | Displays the count of failed connections on your monitored machines, the failure trend, and if the percentage of failures is increasing over time. |
+| Security and Audit | Provides an analysis of your TCP/IP traffic that reports on overall connections, malicious connections, and where the IP endpoints reside globally. To enable all features, you'll need to enable Security Detection. |
+| TCP Traffic | Provides a ranked report for your monitored machines and their sent, received, and total network traffic in a grid and displayed as a trend line. |
+| Traffic Comparison | Compares network traffic trends for a single machine or a group of machines. |
+| Log Analytics agent | Analyzes the health of your agents, including the number of agents that connect to a workspace, which agents are unhealthy, and the effect of the agent on the performance of the machine. This workbook isn't available from VM insights like the other workbooks. On the Azure Monitor menu, go to **Workbooks** and select **Public Templates**. |
-See [Create interactive reports VM insights with workbooks](vminsights-workbooks.md) for detailed instructions on creating your own custom workbooks.
+For instructions on how to create your own custom workbooks, see [Create interactive reports VM insights with workbooks](vminsights-workbooks.md).
## Next steps
-* [Create alerts from collected data.](monitor-virtual-machine-alerts.md)
-* [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
+* [Create alerts from collected data](monitor-virtual-machine-alerts.md)
+* [Monitor workloads running on virtual machines](monitor-virtual-machine-workloads.md)
azure-resource-manager Operators Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators-access.md
+
+ Title: Bicep access operators
+description: Describes Bicep resource access operator and property access operator.
+++ Last updated : 07/22/2021++
+# Bicep access operators
+
+The access operators are used to access properties of objects and resources. To run the examples, use Azure CLI or Azure PowerShell to [deploy a Bicep file](./quickstart-create-bicep-use-visual-studio-code.md#deploy-the-bicep-file).
+
+| Operator | Name |
+| - | - |
+| `::` | [Nested resource accessor](#nested-resource-accessor) |
+| `.` | [Property accessor](#property-accessor) |
+
+## Nested resource accessor
+
+`<parent-resource-symbolic-name>::<nested-resource-symbolic-name>`
+
+Nested resource accessors are used to access resources that are declared inside another resource. The symbolic name declared by a nested resource can normally only be referenced within the body of the containing resource. To reference a nested resource outside the containing resource, it must be qualified with the containing resource name and the [::](./child-resource-name-type.md) operator. Other resources declared within the same containing resource can use the name without qualification.
+
+### Example
+
+```bicep
+resource myParent 'My.Rp/parentType@2020-01-01' = {
+ name: 'myParent'
+ location: 'West US'
+
+ // declares a nested resource inside 'myParent'
+ resource myChild 'childType' = {
+ name: 'myChild'
+ properties: {
+ displayName: 'Child Resource'
+ }
+ }
+
+ // 'myChild' can be referenced inside the body of 'myParent'
+ resource mySibling 'childType' = {
+ name: 'mySibling'
+ properties: {
+ displayName: 'Sibling of ${myChild.properties.displayName}'
+ }
+ }
+}
+
+// accessing 'myChild' here requires the resource access operator
+output displayName string = myParent::myChild.properties.displayName
+```
+
+Because the declaration of `myChild` is contained within `myParent`, the access to `myChild`'s properties must be qualified with `myParent::`.
+
+## Property accessor
+
+`<object-name>.<property-name>`
+
+Property accessors are used to access properties of an object. Property accessors can be used with any object, including parameters and variables of object types and object literals. Using a property accessor on an expression of non-object type is an error.
+
+### Example
+
+```bicep
+var x = {
+ y: {
+ z: 'Hello'
+ a: true
+ }
+ q: 42
+}
+
+output outputZ string = x.y.z
+output outputQ int = x.q
+```
+
+Output from the example:
+
+| Name | Type | Value |
+| - | - | - |
+| `outputZ` | string | 'Hello' |
+| `outputQ` | integer | 42 |
+
+## Next steps
+
+- To create a Bicep file, see [Quickstart: Create Bicep files with Visual Studio Code](./quickstart-create-bicep-use-visual-studio-code.md).
+- For information about how to resolve Bicep type errors, see [Any function for Bicep](./bicep-functions-any.md).
+- To compare syntax for Bicep and JSON, see [Comparing JSON and Bicep for templates](./compare-template-syntax.md).
+- For examples of Bicep functions, see [Bicep functions](./bicep-functions.md).
azure-resource-manager Operators https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/operators.md
description: Describes the Bicep operators available for Azure Resource Manager
Previously updated : 06/23/2021 Last updated : 07/22/2021 # Bicep operators
-This article describes the Bicep operators that are available when you create a Bicep template and use Azure Resource Manager to deploy resources. Operators are used to calculate values, compare values, or evaluate conditions. There are three types of Bicep operators:
+This article describes the Bicep operators that are available when you create a Bicep template and use Azure Resource Manager to deploy resources. Operators are used to calculate values, compare values, or evaluate conditions. There are four types of Bicep operators:
- [comparison](#comparison)
- [logical](#logical)
- [numeric](#numeric)
+- [access](#access)
Enclosing an expression between `(` and `)` allows you to override the default Bicep operator precedence. For example, the expression x + y / z evaluates the division first and then the addition. However, the expression (x + y) / z evaluates the addition first and division second.
The numeric operators use integers to do calculations and return integer values.
> Subtract and minus use the same operator. The functionality is different because subtract uses two
> operands and minus uses one operand.
+## Access
+
+The access operators are used to access properties of objects and resources.
+
+| Operator | Name | Description |
+| - | - | - |
+| `::` | [Nested resource accessor](./operators-access.md#nested-resource-accessor) | Access a nested resource from outside its containing resource. |
+| `.` | [Property accessor](./operators-access.md#property-accessor) | Access properties of an object. |
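For example, the following minimal Bicep sketch (the parameter and property names are hypothetical) shows the property accessor; the nested resource accessor is demonstrated in the examples in the linked article.

```bicep
param storageSettings object = {
  sku: 'Standard_LRS'
  location: 'westus'
}

// '.' reads a property from the object
output skuName string = storageSettings.sku
```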
+
## Operator precedence and associativity

The operators below are listed in descending order of precedence (the higher the position the higher the precedence). Operators listed at the same level have equal precedence.
azure-resource-manager Microsoft Solutions Armapicontrol https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/managed-applications/microsoft-solutions-armapicontrol.md
The control's output is not displayed to the user. Instead, the result of the op
For example, an ARM call into `Microsoft.Network/expressRouteCircuits` resource provider:

```json
- "path": "<subid>/resourceGroup/<resourceGroupName>/providers/Microsoft.Network/expressRouteCircuits/<routecircuitName>/?api-version=2020-05-01"
+  "path": "subscriptions/<subid>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/expressRouteCircuits/<routecircuitName>/?api-version=2020-05-01"
```

- The `request.body` property is optional. Use it to specify a JSON body that is sent with the request. The body can be static content or constructed dynamically by referring to output values from other controls.
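Putting these pieces together, the following is a sketch of how an `ArmApiControl` element that makes this call might be declared. The control name is a placeholder, and `request.body` is omitted because this example is a GET call.

```json
{
  "name": "expressRouteCircuitApi",
  "type": "Microsoft.Solutions.ArmApiControl",
  "request": {
    "method": "GET",
    "path": "subscriptions/<subid>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/expressRouteCircuits/<routecircuitName>/?api-version=2020-05-01"
  }
}
```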
azure-sql Connect Query Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/connect-query-python.md
To further explore Python and the database in Azure SQL Database, see [Azure SQL
server = '<server>.database.windows.net'
database = '<database>'
username = '<username>'
- password = '<password>'
+ password = '{<password>}'
driver= '{ODBC Driver 17 for SQL Server}'
- with pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
+ with pyodbc.connect('DRIVER='+driver+';SERVER=tcp:'+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password) as conn:
with conn.cursor() as cursor:
    cursor.execute("SELECT TOP 3 name, collation_name FROM sys.databases")
    row = cursor.fetchone()
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
Previously updated : 06/23/2021 Last updated : 07/21/2021 # Resource limits for single databases using the vCore purchasing model [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|In-memory OLTP storage (GB)|172|216|304|704|1768|
|Max data size (GB)|1280|1536|2048|4096|4096|
|Max log size (GB) <sup>1</sup>|427|512|683|1024|1024|
-|TempDB max data size (GB)|4096|2048|1024|768|640|
+|TempDB max data size (GB)|640|768|1024|2048|4096|
|[Max local storage size](resource-limits-logical-server.md#storage-space-governance) (GB)|13836|13836|13836|13836|13836|
|Storage type|Local SSD|Local SSD|Local SSD|Local SSD|Local SSD|
|IO latency (approximate)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|1-2 ms (write)<br>1-2 ms (read)|
azure-sql Sql Data Sync Data Sql Server Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-data-sync-data-sql-server-sql-database.md
Provisioning and deprovisioning during sync group creation, update, and deletion
- Truncating tables is not an operation supported by Data Sync (changes won't be tracked). - Hyperscale databases are not supported. - Memory-optimized tables are not supported.-- If the hub and member databases are in a virtual network, Data Sync won't work because the sync app, which is responsible for running sync between hub and members, does not support accessing the hub or member databases inside a customer's private link. This limitation still applies when customer also uses the Data Sync Private Link feature. #### Unsupported data types
azure-sql Log Replay Service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/log-replay-service-migrate.md
We recommend the following best practices:
- Split full and differential backups into multiple files, instead of using a single file. - Enable backup compression. - Use Cloud Shell to run scripts, because it will always be updated to the latest cmdlets released.-- Plan to complete the migration within 47 hours after you start LRS. This is a grace period that prevents the installation of system-managed software patches.
+- Plan to complete the migration within 36 hours after you start LRS. This is a grace period that prevents the installation of system-managed software patches.
> [!IMPORTANT] > - You can't use the database that's being restored through LRS until the migration process finishes.
az sql midb log-replay start <required parameters> &
``` > [!IMPORTANT]
-> After you start LRS, any system-managed software patches are halted for 47 hours. After this window, the next automated software patch will automatically stop LRS. If that happens, you can't resume migration and need to restart it from scratch.
+> After you start LRS, any system-managed software patches are halted for 36 hours. After this window, the next automated software patch will automatically stop LRS. If that happens, you can't resume migration and need to restart it from scratch.
## Monitor the migration progress
az sql midb log-replay complete -g mygroup --mi myinstance -n mymanageddb --last
Functional limitations of LRS are: - The database that you're restoring can't be used for read-only access during the migration process.-- System-managed software patches are blocked for 47 hours after you start LRS. After this time window expires, the next software update will stop LRS. You then need to restart LRS from scratch.
+- System-managed software patches are blocked for 36 hours after you start LRS. After this time window expires, the next software update will stop LRS. You then need to restart LRS from scratch.
- LRS requires databases on SQL Server to be backed up with the `CHECKSUM` option enabled. - The SAS token that LRS will use must be generated for the entire Azure Blob Storage container, and it must have only read and list permissions. - Backup files for different databases must be placed in separate folders on Blob Storage.
azure-sql Doc Changes Updates Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes.md
vm-windows-sql-server Previously updated : 07/01/2021 Last updated : 07/21/2021 # Documentation changes for SQL Server on Azure Virtual Machines [!INCLUDE[appliesto-sqlvm](../../includes/appliesto-sqlvm.md)] Azure allows you to deploy a virtual machine (VM) with an image of SQL Server built in. This article summarizes the documentation changes associated with new features and improvements in the recent releases of [SQL Server on Azure Virtual Machines](https://azure.microsoft.com/services/virtual-machines/sql-server/).
+## July 2021
+
+| Changes | Details |
+| | |
+| **Repair SQL Server IaaS extension in portal** | It's now possible to verify the status of your SQL Server IaaS Agent extension directly from the Azure portal, and [repair](sql-agent-extension-manually-register-single-vm.md#repair-extension) it, if necessary. |
++ ## June 2021 | Changes | Details |
azure-sql Sql Agent Extension Automatic Registration All Vms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-automatic-registration-all-vms.md
tags: azure-service-management -+ vm-windows-sql-server Last updated 11/07/2020
azure-sql Sql Agent Extension Manually Register Single Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/virtual-machines/windows/sql-agent-extension-manually-register-single-vm.md
ms.devlang: na
vm-windows-sql-server Previously updated : 11/07/2020 Last updated : 07/21/2021
New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Location $v
-## Verify mode
+## Check extension mode
-You can view the current mode of your SQL Server IaaS agent by using Azure PowerShell:
+Use Azure PowerShell to check what mode your SQL Server IaaS agent extension is in.
+
+To check the mode of the extension, use this Azure PowerShell cmdlet:
```powershell-interactive # Get the SqlVirtualMachine
To verify the registration status using the Azure portal, follow these steps:
![Verify status with SQL RP registration](./media/sql-agent-extension-manually-register-single-vm/verify-registration-status.png)
+Alternatively, you can check the status by choosing **Repair** under the **Support + troubleshooting** pane in the **SQL virtual machine** resource. The provisioning state for the SQL IaaS agent extension can be **Succeeded** or **Failed**.
+ ### Command line Verify current SQL Server VM registration status using either Azure CLI or Azure PowerShell. `ProvisioningState` will show `Succeeded` if registration was successful.
To verify the registration status using the Azure PowerShell, run the following
An error indicates that the SQL Server VM has not been registered with the extension.
+## Repair extension
+
+It's possible for your SQL IaaS agent extension to be in a failed state. Use the Azure portal to repair the SQL IaaS agent extension. To do so, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your [SQL Server VMs](manage-sql-vm-portal.md).
+1. Select your SQL Server VM from the list. If your SQL Server VM is not listed here, it likely hasn't been registered with the SQL IaaS Agent extension.
+1. Select **Repair** under **Support + Troubleshooting** in the **SQL virtual machine** resource page.
+
+ :::image type="content" source="media/sql-agent-extension-manually-register-single-vm/repair-extension.png" alt-text="Select **Repair** under **Support + Troubleshooting** in the **SQL virtual machine** resource page":::
+
+1. If your provisioning state shows as **Failed**, choose **Repair** to repair the extension. If your state is **Succeeded**, you can select the **Force repair** checkbox to repair the extension regardless of state.
+
+ ![If your provisioning state shows as **Failed**, choose **Repair** to repair the extension. If your state is **Succeeded** you can check the box next to **Force repair** to repair the extension regardless of state.](./media/sql-agent-extension-manually-register-single-vm/force-repair-extension.png)
+++ ## Unregister from extension To unregister your SQL Server VM with the SQL IaaS Agent extension, delete the SQL virtual machine *resource* using the Azure portal or Azure CLI. Deleting the SQL virtual machine *resource* does not delete the SQL Server VM. However, use caution and follow the steps carefully because it is possible to inadvertently delete the virtual machine when attempting to remove the *resource*.
azure-video-analyzer Analyze Live Video Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/analyze-live-video-custom-vision.md
If you open the topology for this tutorial in a browser, you'll see that the val
3. Search and enable **Show Verbose Message**. ![Screenshot that shows Show Verbose Message.](./media/custom-vision/show-verbose-message.png)
-4. To start a debugging session, select the **F5** key. You see messages printed in the **TERMINAL** window.
+4. ::: zone pivot="programming-language-csharp"
+ [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
+ ::: zone-end
+
+ ::: zone pivot="programming-language-python"
+ [!INCLUDE [header](includes/common-includes/python-run-program.md)]
+ ::: zone-end
+ 5. The operations.json code starts off with calls to the direct methods `livePipelineList` and `livePipelineList`. If you cleaned up resources after you completed previous quickstarts, this process will return empty lists and then pause. To continue, select the **Enter** key. The **TERMINAL** window shows the next set of direct method calls:
azure-video-analyzer Analyze Live Video Use Your Model Grpc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/analyze-live-video-use-your-model-grpc.md
In this quickstart, you'll use Video Analyzer to detect objects such as vehicles
## Run the sample program
-1. To start a debugging session, select the F5 key. You see messages printed in the TERMINAL window.
+1. ::: zone pivot="programming-language-csharp"
+ [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
+ ::: zone-end
+
+ ::: zone pivot="programming-language-python"
+ [!INCLUDE [header](includes/common-includes/python-run-program.md)]
+ ::: zone-end
1. The **operations.json** code starts off with calls to the direct methods pipelineTopologyList and livePipelineList. If you cleaned up resources after you completed previous quickstarts, then this process will return empty lists and then pause. To continue, select the Enter key. ```
azure-video-analyzer Analyze Live Video Use Your Model Http https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/analyze-live-video-use-your-model-http.md
Open an application such as [VLC media player](https://www.videolan.org/vlc/). S
## Run the sample program
-1. To start a debugging session, select the F5 key. You see messages printed in the TERMINAL window.
+1. ::: zone pivot="programming-language-csharp"
+ [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
+ ::: zone-end
+
+ ::: zone pivot="programming-language-python"
+ [!INCLUDE [header](includes/common-includes/python-run-program.md)]
+ ::: zone-end
1. The operations.json code starts off with calls to the direct methods `pipelineTopologyList` and `livePipelineList`. If you cleaned up resources after you completed previous quickstarts, then this process will return empty lists and then pause. To continue, select the Enter key. ```
azure-video-analyzer Detect Motion Record Video Edge Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-docs/detect-motion-record-video-edge-devices.md
Complete the following steps to use Video Analyzer to detect the motion of the c
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/vscode-common-screenshots/verbose-message.png" alt-text= "Show Verbose Message":::
-1. Start a debugging session by selecting the F5 key. The **TERMINAL** window prints some messages.
+1. ::: zone pivot="programming-language-csharp"
+ [!INCLUDE [header](includes/common-includes/csharp-run-program.md)]
+ ::: zone-end
+
+ ::: zone pivot="programming-language-python"
+ [!INCLUDE [header](includes/common-includes/python-run-program.md)]
+ ::: zone-end
1. The _operations.json_ code calls the direct methods `pipelineTopologyList` and `livePipelineList`. If you cleaned up resources after previous quickstarts, then this process will return empty lists and then pause. Press the Enter key. ```
azure-video-analyzer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-video-analyzer/video-analyzer-for-media-docs/considerations-when-use-at-scale.md
When you upload videos using URL, you just need to provide a path to the locatio
To see an example of how to upload videos using URL, check out [this example](upload-index-videos.md#code-sample). Or, you can use [AzCopy](../../storage/common/storage-use-azcopy-v10.md) for a fast and reliable way to get your content to a storage account from which you can submit it to Video Analyzer for Media using [SAS URL](../../storage/common/storage-sas-overview.md). Video Analyzer for Media recommends using *readonly* SAS URLs.
-## Increase media reserved units if needed
+## Increase media reserved units is no longer available through Video Analyzer for Media
-Usually in the proof of concept stage when you just start using Video Analyzer for Media, you donΓÇÖt need a lot of computing power. When you start having a larger archive of videos you need to index and you want the process to be at a pace that fits your use case, you need to scale up your usage of Video Analyzer for Media. Therefore, you should think about increasing the number of compute resources you use if the current amount of computing power is just not enough.
-
-In Azure Media Services, when you want to increase computing power and parallelization, you need to pay attention to media [reserved units](../../media-services/latest/concept-media-reserved-units.md)(RUs). The RUs are the compute units that determine the parameters for your media processing tasks. The number of RUs affects the number of media tasks that can be processed concurrently in each account and their type determines the speed of processing and one video might require more than one RU if its indexing is complex. When your RUs are busy, new tasks will be held in a queue until another resource is available.
-
-To operate efficiently and to avoid having resources that stay idle part of the time, Video Indexer offers an auto-scale system that spins RUs down when less processing is needed and spin RUs up when you are in your rush hours (up to fully use all of your RUs). You can enable this functionality by [turning on the autoscale](manage-account-connected-to-azure.md#autoscale-reserved-units) in the account settings or using [Update-Paid-Account-Azure-Media-Services API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Update-Paid-Account-Azure-Media-Services).
---
-To minimize indexing duration and low throughput we recommend, you start with 10 RUs of type S3. Later if you scale up to support more content or higher concurrency, and you need more resources to do so, you can [contact us using the support system](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) (on paid accounts only) to ask for more RUs allocation.
+Starting August 1st, 2021, Azure Video Analyzer for Media (formerly Video Indexer) no longer exposes the option to increase Media [Reserved Units](https://docs.microsoft.com/azure/media-services/latest/concept-media-reserved-units) (MRUs). MRUs are now automatically scaled by [Azure Media Services](https://docs.microsoft.com/azure/media-services/latest/media-services-overview) (AMS), so you don't need to manage them through Azure Video Analyzer for Media.
## Respect throttling
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/enable-public-internet-access.md
In this scenario, you'll publish the IIS webserver to the internet. Use the publ
1. Select the Azure VMware Solution private cloud.
- :::image type="content" source="media/public-ip-usage/avs-private-cloud-resource.png" alt-text="Screenshot of the Azure VMware Solution private cloud." border="true" lightbox="media/public-ip-usage/avs-private-cloud-resource.png":::
+ :::image type="content" source="media/public-ip-usage/avs-private-cloud-resource.png" alt-text="Screenshot of the Azure VMware Solution private cloud." lightbox="media/public-ip-usage/avs-private-cloud-resource.png":::
1. Under **Manage**, select **Connectivity**.
- :::image type="content" source="media/public-ip-usage/avs-private-cloud-manage-menu.png" alt-text="Screenshot of the Connectivity section." border="true" lightbox="media/public-ip-usage/avs-private-cloud-manage-menu.png":::
+ :::image type="content" source="media/public-ip-usage/avs-private-cloud-manage-menu.png" alt-text="Screenshot of the Connectivity section." lightbox="media/public-ip-usage/avs-private-cloud-manage-menu.png":::
1. Select the **Public IP** tab and then select **Configure**.
- :::image type="content" source="media/public-ip-usage/connectivity-public-ip-tab.png" alt-text="Screenshot that shows where to begin to configure the public IP" border="true" lightbox="media/public-ip-usage/connectivity-public-ip-tab.png":::
+ :::image type="content" source="media/public-ip-usage/connectivity-public-ip-tab.png" alt-text="Screenshot that shows where to begin to configure the public IP." lightbox="media/public-ip-usage/connectivity-public-ip-tab.png":::
1. Accept the default values or change them, and then select **Create**.
We can check and add more public IP addresses by following the below steps.
1. Select a deployed firewall and then select **Visit Azure Firewall Manager to configure and manage this firewall**.
- :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall" border="true" lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
+ :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall." lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
1. Select **Secured virtual hubs** and, from the list, select a virtual hub.
- :::image type="content" source="media/public-ip-usage/select-virtual-hub.png" alt-text="Screenshot of Firewall Manager" lightbox="media/public-ip-usage/select-virtual-hub.png":::
+ :::image type="content" source="media/public-ip-usage/select-virtual-hub.png" alt-text="Screenshot of Firewall Manager." lightbox="media/public-ip-usage/select-virtual-hub.png":::
1. On the virtual hub page, select **Public IP configuration**, and to add more public IP address, then select **Add**.
- :::image type="content" source="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png" alt-text="Screenshot of how to add a public IP configuration in Firewall Manager" border="true" lightbox="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png":::
+ :::image type="content" source="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png" alt-text="Screenshot of how to add a public IP configuration in Firewall Manager." lightbox="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png":::
1. Provide the number of IPs required and select **Add**.
- :::image type="content" source="media/public-ip-usage/add-number-of-ip-addresses-required.png" alt-text="Screenshot to add a specified number of public IP configurations" border="true":::
+ :::image type="content" source="media/public-ip-usage/add-number-of-ip-addresses-required.png" alt-text="Screenshot to add a specified number of public IP configurations.":::
## Create firewall policies
Once all components are deployed, you can see them in the added Resource group.
1. Select a deployed firewall and then select **Visit Azure Firewall Manager to configure and manage this firewall**.
- :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall" border="true" lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
+ :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall." lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
1. Select **Azure Firewall Policies** and then select **Create Azure Firewall Policy**.
- :::image type="content" source="media/public-ip-usage/create-firewall-policy.png" alt-text="Screenshot of how to create a firewall policy in Firewall Manager" border="true" lightbox="media/public-ip-usage/create-firewall-policy.png":::
+ :::image type="content" source="media/public-ip-usage/create-firewall-policy.png" alt-text="Screenshot of how to create a firewall policy in Firewall Manager." lightbox="media/public-ip-usage/create-firewall-policy.png":::
1. Under the **Basics** tab, provide the required details and select **Next: DNS Settings**.
Once all components are deployed, you can see them in the added Resource group.
1. Select a hub from the list and select **Add**.
- :::image type="content" source="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png" alt-text="Screenshot that shows the selected hubs that will be converted to Secured Virtual Hubs." border="true" lightbox="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png":::
+ :::image type="content" source="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png" alt-text="Screenshot that shows the selected hubs that will be converted to Secured Virtual Hubs." lightbox="media/public-ip-usage/secure-hubs-with-azure-firewall-polcy.png":::
1. Select **Next: Tags**.
azure-vmware Netapp Files With Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/netapp-files-with-azure-vmware-solution.md
Services where Azure NetApp Files are used:
The diagram shows a connection through Azure ExpressRoute to an Azure VMware Solution private cloud. The Azure VMware Solution environment accesses the Azure NetApp Files share mounted on Azure VMware Solution VMs.
-![Diagram showing NetApp Files for Azure VMware Solution architecture.](media/net-app-files/net-app-files-topology.png)
## Prerequisites
You'll verify the pre-configured Azure NetApp Files created in Azure on Azure Ne
1. In the Azure portal, under **STORAGE**, select **Azure NetApp Files**. A list of your configured Azure NetApp Files will show.
- :::image type="content" source="media/net-app-files/azure-net-app-files-list.png" alt-text="Screenshot showing list of pre-configured Azure NetApp Files.":::
+ :::image type="content" source="media/netapp-files/azure-netapp-files-list.png" alt-text="Screenshot showing list of pre-configured Azure NetApp Files.":::
2. Select a configured NetApp Files account to view its settings. For example, select **Contoso-anf2**. 3. Select **Capacity pools** to verify the configured pool.
- :::image type="content" source="media/net-app-files/net-app-settings.png" alt-text="Screenshot showing options to view capacity pools and volumes of a configured NetApp Files account.":::
+ :::image type="content" source="media/netapp-files/netapp-settings.png" alt-text="Screenshot showing options to view capacity pools and volumes of a configured NetApp Files account.":::
The Capacity pools page opens showing the capacity and service level. In this example, the storage pool is configured as 4 TiB with a Premium service level.
You'll verify the pre-configured Azure NetApp Files created in Azure on Azure Ne
5. Select a volume to view its configuration.
- :::image type="content" source="media/net-app-files/azure-net-app-volumes.png" alt-text="Screenshot showing volumes created under the capacity pool.":::
+ :::image type="content" source="media/netapp-files/azure-netapp-volumes.png" alt-text="Screenshot showing volumes created under the capacity pool.":::
A window opens showing the configuration details of the volume.
- :::image type="content" source="media/net-app-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
+ :::image type="content" source="media/netapp-files/configuration-of-volume.png" alt-text="Screenshot showing configuration details of a volume.":::
You can see that anfvolume has a size of 200 GiB and is in capacity pool anfpool1. It's exported as an NFS file share via 10.22.3.4:/ANFVOLUME. One private IP from the Azure Virtual Network (VNet) was created for Azure NetApp Files and the NFS path to mount on the VM.
backup Backup Azure Arm Userestapi Backupazurevms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-backupazurevms.md
Let's assume you want to protect a VM "testVM" under a resource group "testRG" t
### Discover unprotected Azure VMs
-First, the vault should be able to identify the Azure VM. This is triggered using the [refresh operation](/rest/api/backup/2021-02-10/protection-containers/refresh). It's an asynchronous *POST* operation that makes sure the vault gets the latest list of all unprotected VM in the current subscription and 'caches' them. Once the VM is 'cached', Recovery services will be able to access the VM and protect it.
+First, the vault should be able to identify the Azure VM. This is triggered using the [refresh operation](/rest/api/backup/protection-containers/refresh). It's an asynchronous *POST* operation that makes sure the vault gets the latest list of all unprotected VMs in the current subscription and 'caches' them. Once a VM is 'cached', Recovery Services will be able to access the VM and protect it.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-version=2016-12-01
X-Powered-By: ASP.NET
### Selecting the relevant Azure VM
- You can confirm that "caching" is done by [listing all protectable items](/rest/api/backup/2021-02-10/backup-protected-items/list) under the subscription and locate the desired VM in the response. [The response of this operation](#example-responses-to-get-operation) also gives you information on how Recovery Services identifies a VM. Once you are familiar with the pattern, you can skip this step and directly proceed to [enabling protection](#enabling-protection-for-the-azure-vm).
You can confirm that "caching" is done by [listing all protectable items](/rest/api/backup/backup-protectable-items/list) under the subscription and locating the desired VM in the response. [The response of this operation](#example-responses-to-get-operation) also gives you information on how Recovery Services identifies a VM. Once you're familiar with the pattern, you can skip this step and directly proceed to [enabling protection](#enabling-protection-for-the-azure-vm).
This operation is a *GET* operation.
The *GET* URI has all the required parameters. No additional request body is nee
|Name |Type |Description | ||||
-|200 OK | [WorkloadProtectableItemResourceList](/rest/api/backup/2021-02-10/backup-protected-items/list#workloadprotectableitemresourcelist) | OK |
+|200 OK | [WorkloadProtectableItemResourceList](/rest/api/backup/backup-protectable-items/list#workloadprotectableitemresourcelist) | OK |
#### Example responses to get operation
In the example, the above values translate to:
### Enabling protection for the Azure VM
-After the relevant VM is "cached" and "identified", select the policy to protect. To know more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/2021-02-10/backup-policies/list). Then select the [relevant policy](/rest/api/backup/2021-02-10/protection-policies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](backup-azure-arm-userestapi-createorupdatepolicy.md). "DefaultPolicy" is selected in the example below.
+After the relevant VM is "cached" and "identified", select the policy to protect. To know more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/backup-policies/list). Then select the [relevant policy](/rest/api/backup/protection-policies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](backup-azure-arm-userestapi-createorupdatepolicy.md). "DefaultPolicy" is selected in the example below.
Enabling protection is an asynchronous *PUT* operation that creates a 'protected item'.
To create a protected item, following are the components of the request body.
|||| |properties | AzureIaaSVMProtectedItem |ProtectedItem Resource properties |
-For the complete list of definitions of the request body and other details, refer to [create protected item REST API document](/rest/api/backup/2021-02-10/protected-items/create-or-update#request-body).
+For the complete list of definitions of the request body and other details, refer to [create protected item REST API document](/rest/api/backup/protected-items/create-or-update#request-body).
##### Example request body
It returns two responses: 202 (Accepted) when another operation is created and t
|Name |Type |Description | ||||
-|200 OK | [ProtectedItemResource](/rest/api/backup/2021-02-10/protected-item-operation-results/get#protecteditemresource) | OK |
+|200 OK | [ProtectedItemResource](/rest/api/backup/protected-item-operation-results/get#protecteditemresource) | OK |
|202 Accepted | | Accepted | ##### Example responses to create protected item operation
To trigger an on-demand backup, following are the components of the request body
|Name |Type |Description | ||||
-|properties | [IaaSVMBackupRequest](/rest/api/backup/2021-02-10/backups/trigger#iaasvmbackuprequest) |BackupRequestResource properties |
+|properties | [IaaSVMBackupRequest](/rest/api/backup/backups/trigger#iaasvmbackuprequest) |BackupRequestResource properties |
-For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/2021-02-10/backups/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/backups/trigger#request-body).
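For orientation only, here is a hedged C# sketch of posting such a request. Every identifier in it is a placeholder (including the container and protected item names and the expiry timestamp), and the request body shape mirrors the `IaaSVMBackupRequest` described above; check it against the REST reference before relying on it.

```C#
// Illustrative sketch: trigger an on-demand backup for a protected Azure VM.
// All names below are hypothetical placeholders, not values from this article.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TriggerOnDemandBackup
{
    static async Task Main()
    {
        var subscriptionId    = "00000000-0000-0000-0000-000000000000";          // placeholder
        var containerName     = "IaasVMContainer;iaasvmcontainerv2;testRG;testVM"; // placeholder
        var protectedItemName = "VM;iaasvmcontainerv2;testRG;testVM";              // placeholder

        var url = $"https://management.azure.com/Subscriptions/{subscriptionId}" +
                  "/resourceGroups/testRG/providers/Microsoft.RecoveryServices/vaults/testVault" +
                  $"/backupFabrics/Azure/protectionContainers/{containerName}" +
                  $"/protectedItems/{protectedItemName}/backup?api-version=2019-05-13";

        // Assumed body shape: objectType plus an optional recovery point expiry time.
        var body = @"{
          ""properties"": {
            ""objectType"": ""IaasVMBackupRequest"",
            ""recoveryPointExpiryTimeInUTC"": ""2021-12-31T00:00:00Z""
          }
        }";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("ARM_TOKEN"));

        // 202 Accepted means the backup job was queued; track it through the returned headers.
        var response = await client.PostAsync(
            url, new StringContent(body, Encoding.UTF8, "application/json"));
        Console.WriteLine($"{(int)response.StatusCode} {response.StatusCode}");
    }
}
```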
#### Example request body for on-demand backup
If the Azure VM is already backed up, you can specify the list of disks to be ba
> [!IMPORTANT] > The request body above is always the final copy of data disks to be excluded or included. This doesn't *add* to the previous configuration. For example: If you first update the protection as "exclude data disk 1" and then repeat with "exclude data disk 2", *only data disk 2 is excluded* in the subsequent backups and data disk 1 will be included. This is always the final list which will be included/excluded in the subsequent backups.
-To get the current list of disks which are excluded or included, get the protected item information as mentioned [here](/rest/api/backup/2021-02-10/protected-items/get). The response will provide the list of data disk LUNs and indicates whether they are included or excluded.
+To get the current list of disks which are excluded or included, get the protected item information as mentioned [here](/rest/api/backup/protected-items/get). The response will provide the list of data disk LUNs and indicates whether they are included or excluded.
### Stop protection but retain existing data
The response will follow the same format as mentioned [for triggering an on-dema
### Stop protection and delete data
-To remove the protection on a protected VM and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/2021-02-10/protected-items/delete).
+To remove the protection on a protected VM and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/protected-items/delete).
Stopping protection and deleting data is a *DELETE* operation.
backup Backup Azure Arm Userestapi Createorupdatepolicy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-createorupdatepolicy.md
ms.assetid: 5ffc4115-0ae5-4b85-a18c-8a942f6d4870
# Create Azure Recovery Services backup policies using REST API
-The steps to create a backup policy for an Azure Recovery Services vault are outlined in the [policy REST API document](/rest/api/backup/2021-02-10/protection-policies/create-or-update). Let's use this document as a reference to create a policy for Azure VM backup.
+The steps to create a backup policy for an Azure Recovery Services vault are outlined in the [policy REST API document](/rest/api/backup/protection-policies/create-or-update). Let's use this document as a reference to create a policy for Azure VM backup.
## Create or update a policy
For example, to create a policy for Azure VM backup, following are the component
|Name |Required |Type |Description | |||||
-|properties | True | ProtectionPolicy:[AzureIaaSVMProtectionPolicy](/rest/api/backup/2021-02-10/protection-policies/create-or-update#azureiaasvmprotectionpolicy) | ProtectionPolicyResource properties |
+|properties | True | ProtectionPolicy:[AzureIaaSVMProtectionPolicy](/rest/api/backup/protection-policies/create-or-update#azureiaasvmprotectionpolicy) | ProtectionPolicyResource properties |
|tags | | Object | Resource tags |
-For the complete list of definitions in the request body, refer to the [backup policy REST API document](/rest/api/backup/2021-02-10/protection-policies/create-or-update).
+For the complete list of definitions in the request body, refer to the [backup policy REST API document](/rest/api/backup/protection-policies/create-or-update).
### Example request body
It returns two responses: 202 (Accepted) when another operation is created, and
|Name |Type |Description | ||||
-|200 OK | [Protection PolicyResource](/rest/api/backup/2021-02-10/protection-policies/create-or-update#protectionpolicyresource) | OK |
+|200 OK | [Protection PolicyResource](/rest/api/backup/protection-policies/create-or-update#protectionpolicyresource) | OK |
|202 Accepted | | Accepted | ### Example responses
backup Backup Azure Arm Userestapi Managejobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-managejobs.md
An operation such as triggering backup will always return a jobID. For example:
} ```
-The Azure VM backup job is identified by "jobId" field and can be tracked as mentioned [here](/rest/api/backup/2021-02-10/job-details) using a simple *GET* request.
+The Azure VM backup job is identified by "jobId" field and can be tracked as mentioned [here](/rest/api/backup/job-details) using a simple *GET* request.
## Tracking the job
The `{jobName}` is "jobId" mentioned above. The response is always 200 OK with t
|Name |Type |Description | ||||
-|200 OK | [JobResource](/rest/api/backup/2021-02-10/job-details/get#jobresource) | OK |
+|200 OK | [JobResource](/rest/api/backup/job-details/get#jobresource) | OK |
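If you're automating this, a small C# loop can poll the job until it leaves the `InProgress` state. The `backupJobs/{jobName}` path, the `properties.status` field, and the API version below are assumptions drawn from the Job Details API linked above; all resource names are placeholders.

```C#
// Illustrative sketch: poll a backup job until it reaches a terminal state.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class TrackBackupJob
{
    static async Task Main()
    {
        var subscriptionId = "00000000-0000-0000-0000-000000000000"; // placeholder
        var jobId          = "00000000-0000-0000-0000-000000000001"; // the "jobId" returned by the trigger operation
        var url = $"https://management.azure.com/Subscriptions/{subscriptionId}" +
                  "/resourceGroups/testRG/providers/Microsoft.RecoveryServices" +
                  $"/vaults/testVault/backupJobs/{jobId}?api-version=2019-05-13";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("ARM_TOKEN"));

        string status;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(30)); // arbitrary polling interval
            var json = await client.GetStringAsync(url);
            using var doc = JsonDocument.Parse(json);
            status = doc.RootElement.GetProperty("properties").GetProperty("status").GetString();
            Console.WriteLine($"Job status: {status}");
        }
        while (status == "InProgress");
    }
}
```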
#### Example response
backup Backup Azure Arm Userestapi Restoreazurevms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-arm-userestapi-restoreazurevms.md
For any restore operation, one has to identify the relevant recovery point first
## Select Recovery point
-The available recovery points of a backup item can be listed using the [list recovery point REST API](/rest/api/backup/2021-02-10/recovery-points/list). It's a simple *GET* operation with all the relevant values.
+The available recovery points of a backup item can be listed using the [list recovery point REST API](/rest/api/backup/recovery-points/list). It's a simple *GET* operation with all the relevant values.
```http GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints?api-version=2019-05-13
The *GET* URI has all the required parameters. There's no need for an additional
|Name |Type |Description | ||||
-|200 OK | [RecoveryPointResourceList](/rest/api/backup/2021-02-10/recovery-points/list#recoverypointresourcelist) | OK |
+|200 OK | [RecoveryPointResourceList](/rest/api/backup/recovery-points/list#recoverypointresourcelist) | OK |
#### Example response
After selecting the [relevant restore point](#select-recovery-point), proceed to
> [!IMPORTANT] > All details about various restore options and their dependencies are mentioned [here](./backup-azure-arm-restore-vms.md#restore-options). Please review before proceeding to triggering these operations.
-Triggering restore operations is a *POST* request. To know more about the API, refer to the ["trigger restore" REST API](/rest/api/backup/2021-02-10/restores/trigger).
+Triggering restore operations is a *POST* request. To know more about the API, refer to the ["trigger restore" REST API](/rest/api/backup/restores/trigger).
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13
To trigger a disk restore from an Azure VM backup, following are the components
|Name |Type |Description | ||||
-|properties | [IaaSVMRestoreRequest](/rest/api/backup/2021-02-10/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
+|properties | [IaaSVMRestoreRequest](/rest/api/backup/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
-For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
##### Example request
The following request body defines properties required to trigger a disk restore
### Restore disks selectively
-If you are [selectively backing up disks](backup-azure-arm-userestapi-backupazurevms.md#excluding-disks-in-azure-vm-backup), then the current backed-up disk list is provided in the [recovery point summary](#select-recovery-point) and [detailed response](/rest/api/backup/2021-02-10/recovery-points/get). You can also selectively restore disks and more details are provided [here](selective-disk-backup-restore.md#selective-disk-restore). To selectively restore a disk among the list of backed up disks, find the LUN of the disk from the recovery point response and add the **restoreDiskLunList** property to the [request body above](#example-request) as shown below.
+If you are [selectively backing up disks](backup-azure-arm-userestapi-backupazurevms.md#excluding-disks-in-azure-vm-backup), then the current backed-up disk list is provided in the [recovery point summary](#select-recovery-point) and [detailed response](/rest/api/backup/recovery-points/get). You can also selectively restore disks and more details are provided [here](selective-disk-backup-restore.md#selective-disk-restore). To selectively restore a disk among the list of backed up disks, find the LUN of the disk from the recovery point response and add the **restoreDiskLunList** property to the [request body above](#example-request) as shown below.
```json {
To trigger a disk replacement from an Azure VM backup, following are the compone
|Name |Type |Description | ||||
-|properties | [IaaSVMRestoreRequest](/rest/api/backup/2021-02-10/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
+|properties | [IaaSVMRestoreRequest](/rest/api/backup/restores/trigger#iaasvmrestorerequest) | RestoreRequestResourceProperties |
-For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
#### Example request
backup Backup Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-file-share-rest-api.md
For this article, we'll use the following resources:
### Discover storage accounts with unprotected Azure file shares
-The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up to the Recovery Services vault. This is triggered using the [refresh operation](/rest/api/backup/2021-02-10/protection-containers/refresh). It's an asynchronous *POST* operation that ensures the vault gets the latest list of all unprotected Azure File shares in the current subscription and 'caches' them. Once the file share is 'cached', Recovery services can access the file share and protect it.
+The vault needs to discover all Azure storage accounts in the subscription with file shares that can be backed up to the Recovery Services vault. This is triggered using the [refresh operation](/rest/api/backup/protection-containers/refresh). It's an asynchronous *POST* operation that ensures the vault gets the latest list of all unprotected Azure File shares in the current subscription and 'caches' them. Once the file share is 'cached', Recovery services can access the file share and protect it.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupname}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/refreshContainers?api-version=2016-12-01&$filter={$filter}
Date : Mon, 27 Jan 2020 10:53:04 GMT
### Get List of storage accounts with file shares that can be backed up with Recovery Services vault
-To confirm that "caching" is done, list all the storage accounts in the subscription with file shares that can be backed up with the Recovery Services vault. Then locate the desired storage account in the response. This is done using the [GET ProtectableContainers](/rest/api/backup/2021-02-10/protectable-containers/list) operation.
+To confirm that "caching" is done, list all the storage accounts in the subscription with file shares that can be backed up with the Recovery Services vault. Then locate the desired storage account in the response. This is done using the [GET ProtectableContainers](/rest/api/backup/protectable-containers/list) operation.
```http GET https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectableContainers?api-version=2016-12-01&$filter=backupManagementType eq 'AzureStorage'
Since we can locate the *testvault2* storage account in the response body with t
### Register storage account with Recovery Services vault
-This step is only needed if you didn't register the storage account with the vault earlier. You can register the vault via the [ProtectionContainers-Register operation](/rest/api/backup/2021-02-10/protection-containers/register).
+This step is only needed if you didn't register the storage account with the vault earlier. You can register the vault via the [ProtectionContainers-Register operation](/rest/api/backup/protection-containers/register).
```http PUT https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}?api-version=2016-12-01
The create request body is as follows:
"backupManagementType": "AzureStorage" - }
+}
```
-For the complete list of definitions of the request body and other details, refer to [ProtectionContainers-Register](/rest/api/backup/2021-02-10/protection-containers/register#azurestoragecontainer).
+For the complete list of definitions of the request body and other details, refer to [ProtectionContainers-Register](/rest/api/backup/protection-containers/register#azurestoragecontainer).
This is an asynchronous operation and returns two responses: "202 Accepted" when the operation is accepted and "200 OK" when the operation is complete. To track the operation status, use the location header to get the latest status of the operation.
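If you're tracking the registration from code, the polling pattern is straightforward. The following C# sketch assumes the Location header URL from the 202 response and a bearer token are supplied through hypothetical environment variables, and the 10-second interval is an arbitrary choice.

```C#
// Minimal sketch: track an asynchronous Recovery Services operation via its Location header URL.
using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class TrackAsyncOperation
{
    static async Task Main()
    {
        // URL copied from the Location header of the initial 202 Accepted response.
        var locationUrl = Environment.GetEnvironmentVariable("OPERATION_LOCATION_URL");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Bearer", Environment.GetEnvironmentVariable("ARM_TOKEN"));

        HttpResponseMessage response;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(10)); // simple fixed polling interval
            response = await client.GetAsync(locationUrl);
        }
        while (response.StatusCode == HttpStatusCode.Accepted); // 202 => still in progress

        // 200 OK indicates the operation finished; inspect the body for the result.
        Console.WriteLine($"Final status: {(int)response.StatusCode} {response.StatusCode}");
    }
}
```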
You can verify if the registration was successful from the value of the *registr
### Inquire all unprotected files shares under a storage account
-You can inquire about protectable items in a storage account using the [Protection Containers-Inquire](/rest/api/backup/2021-02-10/protection-containers/inquire) operation. It's an asynchronous operation and the results should be tracked using the location header.
+You can inquire about protectable items in a storage account using the [Protection Containers-Inquire](/rest/api/backup/protection-containers/inquire) operation. It's an asynchronous operation and the results should be tracked using the location header.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/inquire?api-version=2016-12-01
Date : Mon, 27 Jan 2020 10:53:05 GMT
### Select the file share you want to back up
-You can list all protectable items under the subscription and locate the desired file share to be backed up using the [GET backupprotectableItems](/rest/api/backup/2021-02-10/backup-protectable-items/list) operation.
+You can list all protectable items under the subscription and locate the desired file share to be backed up using the [GET backupprotectableItems](/rest/api/backup/backup-protectable-items/list) operation.
```http GET https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupProtectableItems?api-version=2016-12-01&$filter={$filter}
The response contains the list of all unprotected file shares and contains all t
### Enable backup for the file share
-After the relevant file share is "identified" with the friendly name, select the policy to protect. To learn more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/2021-02-10/backup-policies/list). Then select the [relevant policy](/rest/api/backup/2021-02-10/protection-policies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](./backup-azure-arm-userestapi-createorupdatepolicy.md).
+After the relevant file share is "identified" with the friendly name, select the policy to protect. To learn more about existing policies in the vault, refer to [list Policy API](/rest/api/backup/backup-policies/list). Then select the [relevant policy](/rest/api/backup/protection-policies/get) by referring to the policy name. To create policies, refer to [create policy tutorial](./backup-azure-arm-userestapi-createorupdatepolicy.md).
Enabling protection is an asynchronous *PUT* operation that creates a "protected item".
To trigger an on-demand backup, following are the components of the request body
| - | -- | | | Properties | AzurefilesharebackupReques | BackupRequestResource properties |
-For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/2021-02-10/backups/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to [trigger backups for protected items REST API document](/rest/api/backup/backups/trigger#request-body).
Request Body example
backup Manage Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/manage-azure-file-share-rest-api.md
For example, the final response of a [trigger backup REST API](backup-azure-file
} ```
-The Azure file share backup job is identified by the **jobId** field and can be tracked as mentioned [here](/rest/api/backup/2021-02-10/job-details) using a GET request.
+The Azure file share backup job is identified by the **jobId** field and can be tracked as mentioned [here](/rest/api/backup/job-details) using a GET request.
### Tracking the job
You can remove protection on a protected file share but retain the data already
"properties": { "protectedItemType": "AzureFileShareProtectedItem", "sourceResourceId": "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/AzureFiles/providers/Microsoft.Storage/storageAccounts/testvault2",
- "policyId": ΓÇ£" ,
-ΓÇ£protectionStateΓÇ¥:ΓÇ¥ProtectionStoppedΓÇ¥
+ "policyId": "" ,
+"protectionState":"ProtectionStopped"
} } ```
GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af
## Stop protection and delete data
-To remove the protection on a protected file share and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/2021-02-10/protected-items/delete).
+To remove the protection on a protected file share and delete the backup data as well, perform a delete operation as detailed [here](/rest/api/backup/protected-items/delete).
```http DELETE https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2019-05-13
backup Restore Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/restore-azure-file-share-rest-api.md
For this article, we'll use the following resources:
## Fetch ContainerName and ProtectedItemName
-For most of the restore related API calls, you need to pass values for the {containerName} and {protectedItemName} URI parameters. Use the ID attribute in the response body of the [GET backupprotectableitems](/rest/api/backup/2021-02-10/protected-items/get) operation to retrieve values for these parameters. In our example, the ID of the file share we want to protect is:
+For most of the restore related API calls, you need to pass values for the {containerName} and {protectedItemName} URI parameters. Use the ID attribute in the response body of the [GET backupprotectableitems](/rest/api/backup/protected-items/get) operation to retrieve values for these parameters. In our example, the ID of the file share we want to protect is:
`"/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/storagecontainer;storage;azurefiles;afsaccount/protectableItems/azurefileshare;azurefiles`
Set the URI values as follows:
* {protectedItemName}: *azurefileshare;azurefiles* * {ResourceGroupName}: *azurefiles*
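A tiny C# helper can pull those two values out of the ID programmatically. The sketch below is a naive string split, shown only as an illustration using the example ID above.

```C#
// Sketch: extract {containerName} and {protectedItemName} from a protectable-item ID.
using System;
using System.Linq;

class ParseProtectableItemId
{
    static void Main()
    {
        var id = "/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles" +
                 "/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure" +
                 "/protectionContainers/storagecontainer;storage;azurefiles;afsaccount" +
                 "/protectableItems/azurefileshare;azurefiles";

        var segments = id.Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

        // The value of interest is the segment that follows its key segment in the resource ID.
        string ValueAfter(string key) =>
            segments.SkipWhile(s => !s.Equals(key, StringComparison.OrdinalIgnoreCase)).Skip(1).First();

        var containerName     = ValueAfter("protectionContainers"); // storagecontainer;storage;azurefiles;afsaccount
        var protectedItemName = ValueAfter("protectableItems");     // azurefileshare;azurefiles

        Console.WriteLine($"containerName     = {containerName}");
        Console.WriteLine($"protectedItemName = {protectedItemName}");
    }
}
```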
-The GET URI has all the required parameters. There's no need for an additional request body.
+The GET URI has all the required parameters. There's no need for another request body.
```http GET https://management.azure.com/Subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/azurefiles/providers/Microsoft.RecoveryServices/vaults/azurefilesvault/backupFabrics/Azure/protectionContainers/StorageContainer;storage;azurefiles;afsaccount/protectedItems/AzureFileShare;azurefiles/recoveryPoints?api-version=2019-05-13
The recovery point is identified with the {name} field in the response above.
## Full share recovery using REST API Use this restore option to restore the complete file share in the original or an alternate location.
-Triggering restore is a POST request and you can perform this operation using the [trigger restore](/rest/api/backup/2021-02-10/restores/trigger) REST API.
+Triggering restore is a POST request and you can perform this operation using the [trigger restore](/rest/api/backup/restores/trigger) REST API.
```http POST https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}/recoveryPoints/{recoveryPointId}/restore?api-version=2019-05-13
Name | Type | Description
| - | - Properties | AzureFileShareRestoreRequest | RestoreRequestResource properties
-For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
### Restore to original location
Name | Type | Description
| - | - Properties | AzureFileShareRestoreRequest | RestoreRequestResource properties
-For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/2021-02-10/restores/trigger#request-body).
+For the complete list of definitions of the request body and other details, refer to the [trigger Restore REST API document](/rest/api/backup/restores/trigger#request-body).
### Restore to original location for item-level recovery using REST API
backup Use Restapi Update Vault Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/use-restapi-update-vault-properties.md
So you need to carefully choose whether or not to disable soft-delete for a part
### Fetch soft delete state using REST API
-By default, the soft-delete state will be enabled for any newly created Recovery Services vault. To fetch/update the state of soft-delete for a vault, use the backup vault's config related [REST API document](/rest/api/backup/2021-02-10/backup-resource-vault-configs)
+By default, the soft-delete state will be enabled for any newly created Recovery Services vault. To fetch or update the state of soft-delete for a vault, use the backup vault's config-related [REST API document](/rest/api/backup/backup-resource-vault-configs).
To fetch the current state of soft-delete for a vault, use the following *GET* operation
The successful response for the 'GET' operation is shown below:
|Name |Type |Description | ||||
-|200 OK | [BackupResourceVaultConfig](/rest/api/backup/2021-02-10/backup-resource-vault-configs/get#backupresourcevaultconfigresource) | OK |
+|200 OK | [BackupResourceVaultConfig](/rest/api/backup/backup-resource-vault-configs/get#backupresourcevaultconfigresource) | OK |
##### Example response
PUT https://management.azure.com/Subscriptions/00000000-0000-0000-0000-000000000
#### Create the request body
-THe following common definitions are used to create a request body
+The following common definitions are used to create a request body
-For more details, refer to [the REST API documentation](/rest/api/backup/2021-02-10/backup-resource-vault-configs/update#request-body)
+For more details, refer to [the REST API documentation](/rest/api/backup/backup-resource-vault-configs/update#request-body)
|Name |Required |Type |Description | |||||
The successful response for the 'PATCH' operation is shown below:
|Name |Type |Description | ||||
-|200 OK | [BackupResourceVaultConfig](/rest/api/backup/2021-02-10/backup-resource-vault-configs/get#backupresourcevaultconfigresource) | OK |
+|200 OK | [BackupResourceVaultConfig](/rest/api/backup/backup-resource-vault-configs/get#backupresourcevaultconfigresource) | OK |
##### Example response for the PATCH operation
cloud-shell Private Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cloud-shell/private-vnet.md
If you already have a desired VNET that you would like to connect to, skip this
In the Azure portal, or using Azure CLI, Azure PowerShell, etc. create a resource group and a virtual network in the new resource group, **the resource group and virtual network need to be in the same region**. ### ARM templates
-Utilize the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template) for creating Cloud Shell resources in a virtual network, and the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template/storage) for creating necessary storage. Take note of your resource names, primarily your file share name.
+Utilize the [Azure Quickstart Template](https://aka.ms/cloudshell/docs/vnet/template) for creating Cloud Shell resources in a virtual network, and the [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/cloud-shell-vnet-storage/) for creating necessary storage. Take note of your resource names, primarily your file share name.
### Open relay firewall Navigate to the relay created using the above template, select "Networking" in settings, allow access from your browser network to the relay. By default the relay is only accessible from the virtual network it has been created in.
cognitive-services Storage Lab Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md
Previously updated : 11/23/2020 Last updated : 07/06/2021 #Customer intent: As a developer of an image-intensive web app, I want to be able to automatically generate captions and search keywords for each of my images.
In this tutorial, you'll learn how to integrate the Azure Computer Vision service into a web app to generate metadata for uploaded images. This is useful for [digital asset management (DAM)](../overview.md#computer-vision-for-digital-asset-management) scenarios, such as if a company wants to quickly generate descriptive captions or searchable keywords for all of its images.
-A full app guide can be found in the [Azure Storage and Cognitive Services Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md) on GitHub, and this tutorial essentially covers Exercise 5 of the lab. You may want to create the full application by following every step, but if you only want to learn how to integrate Computer Vision into an existing web app, read along here.
+You'll use Visual Studio to write an MVC Web app that accepts images uploaded by users and stores the images in Azure blob storage. You'll learn how to read and write blobs in C# and use blob metadata to attach additional information to the blobs you create. Then you'll submit each image uploaded by the user to the Computer Vision API to generate a caption and search metadata for the image. Finally, you can deploy the app to the cloud using Visual Studio.
This tutorial shows you how to: > [!div class="checklist"]
-> * Create a Computer Vision resource in Azure
-> * Perform image analysis on Azure Storage images
+> * Create a storage account and storage containers using the Azure portal
+> * Create a Web app in Visual Studio and deploy it to Azure
+> * Use the Computer Vision API to extract information from images
> * Attach metadata to Azure Storage images
-> * Check image metadata using Azure Storage Explorer
+> * Check image metadata using [Azure Storage Explorer](http://storageexplorer.com/)
+
+> [!TIP]
+> The section [Use Computer Vision to generate metadata](#Exercise5) is most relevant to Image Analysis. Skip to there if you just want to see how Image Analysis is integrated into an established application.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services) before you begin.
## Prerequisites
- [Visual Studio 2017 Community edition](https://www.visualstudio.com/products/visual-studio-community-vs.aspx) or higher, with the "ASP.NET and web development" and "Azure development" workloads installed.
-- An Azure Storage account with a blob container set up for image storage (follow [Exercises 1 of the Azure Storage Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise1) if you need help with this step).
-- The Azure Storage Explorer tool (follow [Exercise 2 of the Azure Storage Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise2) if you need help with this step).
-- An ASP.NET web application with access to Azure Storage (follow [Exercise 3 of the Azure Storage Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise3) to create such an app quickly).
+- The [Azure Storage Explorer](http://storageexplorer.com/) tool installed.
+
+<a name="Exercise1"></a>
+## Create a storage account
+
+In this section, you'll use the [Azure portal](https://portal.azure.com?WT.mc_id=academiccontent-github-cxa) to create a storage account. Then you'll create a pair of containers: one to store images uploaded by the user, and another to store image thumbnails generated from the uploaded images.
+
+1. Open the [Azure portal](https://portal.azure.com?WT.mc_id=academiccontent-github-cxa) in your browser. If you're asked to sign in, do so using your Microsoft account.
+1. To create a storage account, click **+ Create a resource** in the ribbon on the left. Then click **Storage**, followed by **Storage account**.
+
+ ![Creating a storage account](Images/new-storage-account.png)
+1. Enter a unique name for the storage account in the **Name** field and make sure a green check mark appears next to it. The name is important, because it forms one part of the URL through which blobs created under this account are accessed. Place the storage account in a new resource group named "IntellipixResources," and select the region nearest you. Finish by clicking the **Review + create** button at the bottom of the screen to create the new storage account.
+ > [!NOTE]
+ > Storage account names can be 3 to 24 characters in length and can only contain numbers and lowercase letters. In addition, the name you enter must be unique within Azure. If someone else has chosen the same name, you'll be notified that the name isn't available with a red exclamation mark in the **Name** field.
+
+ ![Specifying parameters for a new storage account](Images/create-storage-account.png)
+1. Click **Resource groups** in the ribbon on the left. Then click the "IntellipixResources" resource group.
+
+ ![Opening the resource group](Images/open-resource-group.png)
+1. In the tab that opens for the resource group, click the storage account you created. If the storage account isn't there yet, you can click **Refresh** at the top of the tab until it appears.
+
+ ![Opening the new storage account](Images/open-storage-account.png)
+1. In the tab for the storage account, click **Blobs** to view a list of containers associated with this account.
+
+ ![Viewing blobs button](Images/view-containers.png)
+
+1. The storage account currently has no containers. Before you can create a blob, you must create a container to store it in. Click **+ Container** to create a new container. Type `photos` into the **Name** field and select **Blob** as the **Public access level**. Then click **OK** to create a container named "photos."
+
+ > By default, containers and their contents are private. Selecting **Blob** as the access level makes the blobs in the "photos" container publicly accessible, but doesn't make the container itself public. This is what you want because the images stored in the "photos" container will be linked to from a Web app.
+
+ ![Creating a "photos" container](Images/create-photos-container.png)
+
+1. Repeat the previous step to create a container named "thumbnails," once more ensuring that the container's **Public access level** is set to **Blob**.
+1. Confirm that both containers appear in the list of containers for this storage account, and that the names are spelled correctly.
+
+ ![The new containers](Images/new-containers.png)
+
+1. Close the "Blob service" screen. Click **Access keys** in the menu on the left side of the storage-account screen, and then click the **Copy** button next to **KEY** for **key1**. Paste this access key into your favorite text editor for later use.
+
+ ![Copying the access key](Images/copy-storage-account-access-key.png)
+
+You've now created a storage account to hold images uploaded to the app you're going to build, and containers to store the images in.
+
+<a name="Exercise2"></a>
+## Run Azure Storage Explorer
+
+[Azure Storage Explorer](http://storageexplorer.com/) is a free tool that provides a graphical interface for working with Azure Storage on PCs running Windows, macOS, and Linux. It provides most of the same functionality as the Azure portal and offers other features like the ability to view blob metadata. In this section, you'll use the Microsoft Azure Storage Explorer to view the containers you created in the previous section.
+
+1. If you haven't installed Storage Explorer or would like to make sure you're running the latest version, go to http://storageexplorer.com/ and download and install it.
+1. Start Storage Explorer. If you're asked to sign in, do so using your Microsoft account&mdash;the same one that you used to sign in to the Azure portal. If you don't see the storage account in Storage Explorer's left pane, click the **Manage Accounts** button highlighted below and make sure both your Microsoft account and the subscription used to create the storage account have been added to Storage Explorer.
+
+ ![Managing accounts in Storage Explorer](Images/add-account.png)
+
+1. Click the small arrow next to the storage account to display its contents, and then click the arrow next to **Blob Containers**. Confirm that the containers you created appear in the list.
+
+ ![Viewing blob containers](Images/storage-explorer.png)
+
+The containers are currently empty, but that will change once your app is deployed and you start uploading photos. Having Storage Explorer installed will make it easy for you to see what your app writes to blob storage.
+
+<a name="Exercise3"></a>
+## Create a new Web app in Visual Studio
+
+In this section, you'll create a new Web app in Visual Studio and add code to implement the basic functionality required to upload images, write them to blob storage, and display them in a Web page.
+
+1. Start Visual Studio and use the **File -> New -> Project** command to create a new Visual C# **ASP.NET Web Application** project named "Intellipix" (short for "Intelligent Pictures").
+
+ ![Creating a new Web Application project](Images/new-web-app.png)
+
+1. In the "New ASP.NET Web Application" dialog, make sure **MVC** is selected. Then click **OK**.
+
+ ![Creating a new ASP.NET MVC project](Images/new-mvc-project.png)
+
+1. Take a moment to review the project structure in Solution Explorer. Among other things, there's a folder named **Controllers** that holds the project's MVC controllers, and a folder named **Views** that holds the project's views. You'll be working with assets in these folders and others as you implement the application.
+
+ ![The project shown in Solution Explorer](Images/project-structure.png)
+
+1. Use Visual Studio's **Debug -> Start Without Debugging** command (or press **Ctrl+F5**) to launch the application in your browser. Here's how the application looks in its present state:
+
+ ![The initial application](Images/initial-application.png)
+
+1. Close the browser and return to Visual Studio. In Solution Explorer, right-click the **Intellipix** project and select **Manage NuGet Packages...**. Click **Browse**. Then type `imageresizer` into the search box and select the NuGet package named **ImageResizer**. Finally, click **Install** to install the latest stable version of the package. ImageResizer contains APIs that you'll use to create image thumbnails from the images uploaded to the app. OK any changes and accept any licenses presented to you.
+
+ ![Installing ImageResizer](Images/install-image-resizer.png)
+
+1. Repeat this process to add the NuGet package named **WindowsAzure.Storage** to the project. This package contains APIs for accessing Azure Storage from .NET applications. OK any changes and accept any licenses presented to you.
+
+ ![Installing WindowsAzure.Storage](Images/install-storage-package.png)
+
+1. Open _Web.config_ and add the following statement to the ```<appSettings>``` section, replacing ACCOUNT_NAME with the name of the storage account you created in the first section, and ACCOUNT_KEY with the access key you saved.
+
+ ```xml
+ <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=ACCOUNT_NAME;AccountKey=ACCOUNT_KEY" />
+ ```
+
+1. Open the file named *_Layout.cshtml* in the project's **Views/Shared** folder. On line 19, change "Application name" to "Intellipix." The line should look like this:
+
+ ```C#
+ @Html.ActionLink("Intellipix", "Index", "Home", new { area = "" }, new { @class = "navbar-brand" })
+ ```
+
+ > [!NOTE]
+ > In an ASP.NET MVC project, *_Layout.cshtml* is a special view that serves as a template for other views. You typically define header and footer content that is common to all views in this file.
+
+1. Right-click the project's **Models** folder and use the **Add -> Class...** command to add a class file named *BlobInfo.cs* to the folder. Then replace the empty **BlobInfo** class with the following class definition:
+
+ ```C#
+ public class BlobInfo
+ {
+ public string ImageUri { get; set; }
+ public string ThumbnailUri { get; set; }
+ public string Caption { get; set; }
+ }
+ ```
+
+1. Open *HomeController.cs* from the project's **Controllers** folder and add the following `using` statements to the top of the file:
+
+ ```C#
+ using ImageResizer;
+ using Intellipix.Models;
+ using Microsoft.WindowsAzure.Storage;
+ using Microsoft.WindowsAzure.Storage.Blob;
+ using System.Configuration;
+ using System.Threading.Tasks;
+ using System.IO;
+ ```
+
+1. Replace the **Index** method in *HomeController.cs* with the following implementation:
+
+ ```C#
+ public ActionResult Index()
+ {
+ // Pass a list of blob URIs in ViewBag
+ CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
+ CloudBlobClient client = account.CreateCloudBlobClient();
+ CloudBlobContainer container = client.GetContainerReference("photos");
+ List<BlobInfo> blobs = new List<BlobInfo>();
+
+ foreach (IListBlobItem item in container.ListBlobs())
+ {
+ var blob = item as CloudBlockBlob;
+
+ if (blob != null)
+ {
+ blobs.Add(new BlobInfo()
+ {
+ ImageUri = blob.Uri.ToString(),
+ ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/")
+ });
+ }
+ }
+
+ ViewBag.Blobs = blobs.ToArray();
+ return View();
+ }
+ ```
+
+ The new **Index** method enumerates the blobs in the `"photos"` container and passes an array of **BlobInfo** objects representing those blobs to the view through ASP.NET MVC's **ViewBag** property. Later, you'll modify the view to enumerate these objects and display a collection of photo thumbnails. The classes you'll use to access your storage account and enumerate the blobs&mdash;**[CloudStorageAccount](https://docs.microsoft.com/dotnet/api/microsoft.windowsazure.storage.cloudstorageaccount?WT.mc_id=academiccontent-github-cxa)**, **[CloudBlobClient](https://docs.microsoft.com/dotnet/api/microsoft.windowsazure.storage.blob.cloudblobclient?WT.mc_id=academiccontent-github-cxa)**, and **[CloudBlobContainer](https://docs.microsoft.com/dotnet/api/microsoft.windowsazure.storage.blob.cloudblobcontainer?WT.mc_id=academiccontent-github-cxa)**&mdash;come from the **WindowsAzure.Storage** package you installed through NuGet.
+
+1. Add the following method to the **HomeController** class in *HomeController.cs*:
+
+ ```C#
+ [HttpPost]
+ public async Task<ActionResult> Upload(HttpPostedFileBase file)
+ {
+ if (file != null && file.ContentLength > 0)
+ {
+ // Make sure the user selected an image file
+ if (!file.ContentType.StartsWith("image"))
+ {
+ TempData["Message"] = "Only image files may be uploaded";
+ }
+ else
+ {
+ try
+ {
+ // Save the original image in the "photos" container
+ CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
+ CloudBlobClient client = account.CreateCloudBlobClient();
+ CloudBlobContainer container = client.GetContainerReference("photos");
+ CloudBlockBlob photo = container.GetBlockBlobReference(Path.GetFileName(file.FileName));
+ await photo.UploadFromStreamAsync(file.InputStream);
+
+ // Generate a thumbnail and save it in the "thumbnails" container
+ using (var outputStream = new MemoryStream())
+ {
+ file.InputStream.Seek(0L, SeekOrigin.Begin);
+ var settings = new ResizeSettings { MaxWidth = 192 };
+ ImageBuilder.Current.Build(file.InputStream, outputStream, settings);
+ outputStream.Seek(0L, SeekOrigin.Begin);
+ container = client.GetContainerReference("thumbnails");
+ CloudBlockBlob thumbnail = container.GetBlockBlobReference(Path.GetFileName(file.FileName));
+ await thumbnail.UploadFromStreamAsync(outputStream);
+ }
+ }
+ catch (Exception ex)
+ {
+ // In case something goes wrong
+ TempData["Message"] = ex.Message;
+ }
+ }
+ }
+
+ return RedirectToAction("Index");
+ }
+ ```
+
+ This is the method that's called when you upload a photo. It stores each uploaded image as a blob in the `"photos"` container, creates a thumbnail image from the original image using the `ImageResizer` package, and stores the thumbnail image as a blob in the `"thumbnails"` container.
+
+1. Open *Index.cshtml* in the project's **Views/Home** folder and replace its contents with the following code and markup:
+
+ ```HTML
+ @{
+ ViewBag.Title = "Intellipix Home Page";
+ }
+
+ @using Intellipix.Models
+
+ <div class="container" style="padding-top: 24px">
+ <div class="row">
+ <div class="col-sm-8">
+ @using (Html.BeginForm("Upload", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
+ {
+ <input type="file" name="file" id="upload" style="display: none" onchange="$('#submit').click();" />
+ <input type="button" value="Upload a Photo" class="btn btn-primary btn-lg" onclick="$('#upload').click();" />
+ <input type="submit" id="submit" style="display: none" />
+ }
+ </div>
+ <div class="col-sm-4 pull-right">
+ </div>
+ </div>
+
+ <hr />
+
+ <div class="row">
+ <div class="col-sm-12">
+ @foreach (BlobInfo blob in ViewBag.Blobs)
+ {
+ <img src="@blob.ThumbnailUri" width="192" title="@blob.Caption" style="padding-right: 16px; padding-bottom: 16px" />
+ }
+ </div>
+ </div>
+ </div>
+
+ @section scripts
+ {
+ <script type="text/javascript" language="javascript">
+ if ("@TempData["Message"]" !== "") {
+ alert("@TempData["Message"]");
+ }
+ </script>
+ }
+ ```
+
+ The language used here is [Razor](http://www.asp.net/web-pages/overview/getting-started/introducing-razor-syntax-c), which lets you embed executable code in HTML markup. The ```@foreach``` statement in the middle of the file enumerates the **BlobInfo** objects passed from the controller in **ViewBag** and creates HTML ```<img>``` elements from them. The ```src``` property of each element is initialized with the URI of the blob containing the image thumbnail.
+
+1. Download and unzip the _photos.zip_ file from the [GitHub sample data repository](https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial). This is an assortment of different photos you can use to test the app.
+
+1. Save your changes and press **Ctrl+F5** to launch the application in your browser. Then click **Upload a Photo** and upload one of the images you downloaded. Confirm that a thumbnail version of the photo appears on the page.
+
+ ![Intellipix with one photo uploaded](Images/one-photo-uploaded.png)
+
+1. Upload a few more images from your **photos** folder. Confirm that they appear on the page, too:
+
+ ![Intellipix with three photos uploaded](Images/three-photos-uploaded.png)
+
+1. Right-click in your browser and select **View page source** to view the source code for the page. Find the ```<img>``` elements representing the image thumbnails. Observe that the URLs assigned to the images refer directly to blobs in blob storage. This is because you set the containers' **Public access level** to **Blob**, which makes the blobs inside publicly accessible.
+
+1. Return to Azure Storage Explorer (or restart it if you didn't leave it running) and select the `"photos"` container under your storage account. The number of blobs in the container should equal the number of photos you uploaded. Double-click one of the blobs to download it and see the image stored in the blob.
+
+ ![Contents of the "photos" container](Images/photos-container.png)
+
+1. Open the `"thumbnails"` container in Storage Explorer. Open one of the blobs to view the thumbnail images generated from the image uploads.
+
+The app doesn't yet offer a way to view the original images that you uploaded. Ideally, clicking an image thumbnail should display the original image. You'll add that feature next.
+
+<a name="Exercise4"></a>
+## Add a lightbox for viewing photos
+
+In this section, you'll use a free, open-source JavaScript library to add a lightbox viewer that enables users to see the original images they've uploaded (rather than just the image thumbnails). The files are provided for you. All you have to do is integrate them into the project and make a minor modification to *Index.cshtml*.
+
+1. Download the _lightbox.css_ and _lightbox.js_ files from the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/tree/master/dotnet/ComputerVision/storage-lab-tutorial).
+1. In Solution Explorer, right-click your project's **Scripts** folder and use the **Add -> New Item...** command to create a *lightbox.js* file. Paste in the contents from the example file in the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/storage-lab-tutorial/scripts/lightbox.js).
+
+1. Right-click the project's "Content" folder and use the **Add -> New Item...** command to create a *lightbox.css* file. Paste in the contents from the example file in the [GitHub code repository](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/ComputerVision/storage-lab-tutorial/css/lightbox.css).
+1. Download and unzip the _buttons.zip_ file from the GitHub data files repository: https://github.com/Azure-Samples/cognitive-services-sample-data-files/tree/master/ComputerVision/storage-lab-tutorial. You should have four button images.
+
+1. Right-click the Intellipix project in Solution Explorer and use the **Add -> New Folder** command to add a folder named "Images" to the project.
+
+1. Right-click the **Images** folder and use the **Add -> Existing Item...** command to import the four images you downloaded.
+
+1. Open *BundleConfig.cs* in the project's "App_Start" folder. Add the following statement to the ```RegisterBundles``` method in **BundleConfig.cs**:
+
+ ```C#
+ bundles.Add(new ScriptBundle("~/bundles/lightbox").Include(
+ "~/Scripts/lightbox.js"));
+ ```
+
+1. In the same method, find the statement that creates a ```StyleBundle``` from "~/Content/css" and add *lightbox.css* to the list of style sheets in the bundle. Here is the modified statement:
+
+ ```C#
+ bundles.Add(new StyleBundle("~/Content/css").Include(
+ "~/Content/bootstrap.css",
+ "~/Content/site.css",
+ "~/Content/lightbox.css"));
+ ```
+
+1. Open *_Layout.cshtml* in the project's **Views/Shared** folder and add the following statement just before the ```@RenderSection``` statement near the bottom:
+
+ ```C#
+ @Scripts.Render("~/bundles/lightbox")
+ ```
+
+1. The final task is to incorporate the lightbox viewer into the home page. To do that, open *Index.cshtml* (it's in the project's **Views/Home** folder) and replace the ```@foreach``` loop with this one:
+
+ ```HTML
+ @foreach (BlobInfo blob in ViewBag.Blobs)
+ {
+ <a href="@blob.ImageUri" rel="lightbox" title="@blob.Caption">
+ <img src="@blob.ThumbnailUri" width="192" title="@blob.Caption" style="padding-right: 16px; padding-bottom: 16px" />
+ </a>
+ }
+ ```
-## Create a Computer Vision resource
+1. Save your changes and press **Ctrl+F5** to launch the application in your browser. Then click one of the images you uploaded earlier. Confirm that a lightbox appears and shows an enlarged view of the image.
+
+ ![An enlarged image](Images/lightbox-image.png)
+
+1. Click the **X** in the lower-right corner of the lightbox to dismiss it.
+
+Now you have a way to view the images you uploaded. The next step is to do more with those images.
+
+<a name="Exercise5"></a>
+## Use Computer Vision to generate metadata
+
+### Create a Computer Vision resource
You'll need to create a Computer Vision resource for your Azure account; this resource manages your access to Azure's Computer Vision service. 1. Follow the instructions in [Create an Azure Cognitive Services resource](../../cognitive-services-apis-create-account.md) to create a Computer Vision resource.
-1. Then go to the menu for your resource group and click the Computer Vision API subscription that you just created. Copy the URL under **Endpoint** to somewhere you can easily retrieve it in a moment. Then click **Show access keys**.
+1. Then go to the menu for your resource group and click the Computer Vision API subscription that you created. Copy the URL under **Endpoint** to somewhere you can easily retrieve it in a moment. Then click **Show access keys**.
- ![Azure portal page with the endpoint URL and access keys link outlined](../Images/copy-vision-endpoint.png)
+ ![Azure portal page with the endpoint URL and access keys link outlined](Images/copy-vision-endpoint.png)
[!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)] 1. In the next window, copy the value of **KEY 1** to the clipboard.
- ![Manage keys dialog, with the copy button outlined](../Images/copy-vision-key.png)
+ ![Manage keys dialog, with the copy button outlined](Images/copy-vision-key.png)
-## Add Computer Vision credentials
+### Add Computer Vision credentials
Next, you'll add the required credentials to your app so that it can access Computer Vision resources.
-Open your ASP.NET web application in Visual Studio and navigate to the **Web.config** file at the root of the project. Add the following statements to the `<appSettings>` section of the file, replacing `VISION_KEY` with the key you copied in the previous step, and `VISION_ENDPOINT` with the URL you saved in the step before.
+Navigate to the *Web.config* file at the root of the project. Add the following statements to the `<appSettings>` section of the file, replacing `VISION_KEY` with the key you copied in the previous step, and `VISION_ENDPOINT` with the URL you saved in the step before.
```xml <add key="SubscriptionKey" value="VISION_KEY" />
Open your ASP.NET web application in Visual Studio and navigate to the **Web.con
Then in the Solution Explorer, right-click the project and use the **Manage NuGet Packages** command to install the package **Microsoft.Azure.CognitiveServices.Vision.ComputerVision**. This package contains the types needed to call the Computer Vision API.
-## Add metadata generation code
+### Add metadata generation code
-Next, you'll add the code that actually leverages the Computer Vision service to create metadata for images. These steps will apply to the ASP.NET app in the lab, but you can adapt them to your own app. What's important is that at this point you have an ASP.NET web application that can upload images to an Azure Storage container, read images from it, and display them in the view. If you're unsure about this step, it's best to follow [Exercise 3 of the Azure Storage Lab](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise3).
+Next, you'll add the code that actually uses the Computer Vision service to create metadata for images.
1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file:
Next, you'll add the code that actually leverages the Computer Vision service to
new System.Net.Http.DelegatingHandler[] { }); vision.Endpoint = ConfigurationManager.AppSettings["VisionEndpoint"];
- VisualFeatureTypes[] features = new VisualFeatureTypes[] { VisualFeatureTypes.Description };
+ List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>() { VisualFeatureTypes.Description };
var result = await vision.AnalyzeImageAsync(photo.Uri.ToString(), features); // Record the image description and tags in blob metadata
Next, you'll add the code that actually leverages the Computer Vision service to
} ```
-## Test the app
+### Test the app
+
+Save your changes in Visual Studio and press **Ctrl+F5** to launch the application in your browser. Use the app to upload a few more images, either from the photo set you downloaded or from your own folder. When you hover the cursor over one of the new images in the view, a tooltip window should appear and display the computer-generated caption for the image.
+
+![The computer-generated caption](Images/thumbnail-with-tooltip.png)
+
+To view all of the attached metadata, use Azure Storage Explorer to view the storage container you're using for images. Right-click any of the blobs in the container and select **Properties**. In the dialog, you'll see a list of key-value pairs. The computer-generated image description is stored in the item `Caption`, and the search keywords are stored in `Tag0`, `Tag1`, and so on. When you're finished, click **Cancel** to close the dialog.
+
+![Image properties dialog window, with metadata tags listed](Images/blob-metadata.png)
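If you'd rather inspect or set this metadata from code instead of Storage Explorer, the hedged sketch below uses the same storage types as the **Index** method shown later in this tutorial. The connection string, blob name, and sample caption and tag values are placeholder assumptions.

```csharp
// Sketch only: write and read Caption/Tag0/Tag1... metadata on a blob in the "photos" container.
using System;
using System.Collections.Generic;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class BlobMetadataSketch
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("YOUR_STORAGE_CONNECTION_STRING");
        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("photos");
        CloudBlockBlob blob = container.GetBlockBlobReference("sample.jpg");

        // Load existing metadata first so SetMetadata doesn't wipe anything out.
        blob.FetchAttributes();
        blob.Metadata["Caption"] = "A river running through a forest"; // placeholder caption
        var tags = new List<string> { "river", "forest", "outdoor" };  // placeholder keywords
        for (int i = 0; i < tags.Count; i++)
        {
            blob.Metadata["Tag" + i] = tags[i];
        }
        blob.SetMetadata();

        // Read the metadata back, the same key-value pairs Storage Explorer displays.
        blob.FetchAttributes();
        foreach (KeyValuePair<string, string> item in blob.Metadata)
        {
            Console.WriteLine($"{item.Key} = {item.Value}");
        }
    }
}
```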
+
+<a name="Exercise6"></a>
+## Add search to the app
+
+In this section, you will add a search box to the home page, enabling users to do keyword searches on the images that they've uploaded. The keywords are the ones generated by the Computer Vision API and stored in blob metadata.
+
+1. Open *Index.cshtml* in the project's **Views/Home** folder and add the following statements to the empty ```<div>``` element with the ```class="col-sm-4 pull-right"``` attribute:
+
+ ```HTML
+ @using (Html.BeginForm("Search", "Home", FormMethod.Post, new { enctype = "multipart/form-data", @class = "navbar-form" }))
+ {
+ <div class="input-group">
+ <input type="text" class="form-control" placeholder="Search photos" name="term" value="@ViewBag.Search" style="max-width: 800px">
+ <span class="input-group-btn">
+ <button class="btn btn-primary" type="submit">
+ <i class="glyphicon glyphicon-search"></i>
+ </button>
+ </span>
+ </div>
+ }
+ ```
+
+ This code and markup adds a search box and a **Search** button to the home page.
+
+1. Open *HomeController.cs* in the project's **Controllers** folder and add the following method to the **HomeController** class:
+
+ ```C#
+ [HttpPost]
+ public ActionResult Search(string term)
+ {
+ return RedirectToAction("Index", new { id = term });
+ }
+ ```
+
+ This is the method that's called when the user clicks the **Search** button added in the previous step. It refreshes the page and includes a search parameter in the URL.
+
+1. Replace the **Index** method with the following implementation:
+
+ ```C#
+ public ActionResult Index(string id)
+ {
+ // Pass a list of blob URIs and captions in ViewBag
+ CloudStorageAccount account = CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);
+ CloudBlobClient client = account.CreateCloudBlobClient();
+ CloudBlobContainer container = client.GetContainerReference("photos");
+ List<BlobInfo> blobs = new List<BlobInfo>();
+
+ foreach (IListBlobItem item in container.ListBlobs())
+ {
+ var blob = item as CloudBlockBlob;
+
+ if (blob != null)
+ {
+ blob.FetchAttributes(); // Get blob metadata
+
+ if (String.IsNullOrEmpty(id) || HasMatchingMetadata(blob, id))
+ {
+ var caption = blob.Metadata.ContainsKey("Caption") ? blob.Metadata["Caption"] : blob.Name;
+
+ blobs.Add(new BlobInfo()
+ {
+ ImageUri = blob.Uri.ToString(),
+ ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/"),
+ Caption = caption
+ });
+ }
+ }
+ }
+
+ ViewBag.Blobs = blobs.ToArray();
+ ViewBag.Search = id; // Prevent search box from losing its content
+ return View();
+ }
+ ```
+
+ Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed.
+
+1. Add the following helper method to the **HomeController** class:
+
+ ```C#
+ private bool HasMatchingMetadata(CloudBlockBlob blob, string term)
+ {
+ foreach (var item in blob.Metadata)
+ {
+ if (item.Key.StartsWith("Tag") && item.Value.Equals(term, StringComparison.InvariantCultureIgnoreCase))
+ return true;
+ }
+
+ return false;
+ }
+ ```
+
+ This method is called by the **Index** method to determine whether the metadata keywords attached to a given image blob contain the search term that the user entered.
+
+1. Launch the application again and upload several photos. Feel free to use your own photos, not just the ones provided with the tutorial.
+
+1. Type a keyword such as "river" into the search box. Then click the **Search** button.
+
+ ![Performing a search](Images/enter-search-term.png)
+
+1. Search results will vary depending on what you typed and what images you uploaded, but the result should be a filtered list of images whose metadata keywords include all or part of the keyword that you typed.
+
+ ![Search results](Images/search-results.png)
+
+1. Click the browser's back button to display all of the images again.
+
+You're almost finished. It's time to deploy the app to the cloud.
+
+<a name="Exercise7"></a>
+## Deploy the app to Azure
+
+In this section, you'll deploy the app to Azure from Visual Studio. You'll let Visual Studio create an Azure Web App for you, so you don't have to go into the Azure portal and create it separately.
+
+1. Right-click the project in Solution Explorer and select **Publish...** from the context menu. Make sure **Microsoft Azure App Service** and **Create New** are selected, and then click the **Publish** button.
+
+ ![Publishing the app](Images/publish-1.png)
+
+1. In the next dialog, select the "IntellipixResources" resource group under **Resource Group**. Click the **New...** button next to "App Service Plan" and create a new App Service Plan in the same location you selected for the storage account in [Create a storage account](#Exercise1), accepting the defaults everywhere else. Finish by clicking the **Create** button.
-Save your changes in Visual Studio and press **Ctrl+F5** to launch the application in your browser. Use the app to upload a few images, either from the "photos" folder in the lab's resources or from your own folder. When you hover the cursor over one of the images in the view, a tooltip window should appear and display the computer-generated caption for the image.
+ ![Creating an Azure Web App](Images/publish-2.png)
-![The computer-generated caption](../Images/thumbnail-with-tooltip.png)
+1. After a few moments, the app will appear in a browser window. Note the URL in the address bar. The app is no longer running locally; it's on the Web, where it's publicly reachable.
-To view all of the attached metadata, use the Azure Storage Explorer to view the storage container you're using for images. Right-click any of the blobs in the container and select **Properties**. In the dialog, you'll see a list of key-value pairs. The computer-generated image description is stored in the item "Caption," and the search keywords are stored in "Tag0," "Tag1," and so on. When you're finished, click **Cancel** to close the dialog.
+ ![The finished product!](Images/vs-intellipix.png)
-![Image properties dialog window, with metadata tags listed](../Images/blob-metadata.png)
+If you make changes to the app and want to push them out to the Web, go through the publish process again. You can still test your changes locally before publishing.
## Clean up resources

If you'd like to keep working on your web app, see the [Next steps](#next-steps) section. If you don't plan to continue using this application, you should delete all app-specific resources. To delete resources, you can delete the resource group that contains your Azure Storage subscription and Computer Vision resource. This will remove the storage account, the blobs uploaded to it, and the App Service resource needed to connect with the ASP.NET web app.
-To delete the resource group, open the **Resource groups** tab in the portal, navigate to the resource group you used for this project, and click **Delete resource group** at the top of the view. You'll be asked to type the resource group's name to confirm you want to delete it, because once deleted, a resource group can't be recovered.
+To delete the resource group, open the **Resource groups** tab in the portal, navigate to the resource group you used for this project, and click **Delete resource group** at the top of the view. You'll be asked to type the resource group's name to confirm you want to delete it. Once deleted, a resource group can't be recovered.
## Next steps
-In this tutorial, you set up Azure's Computer Vision service in an existing web app to automatically generate captions and keywords for blob images as they're uploaded. Next, refer to the Azure Storage Lab, Exercise 6, to learn how to add search functionality to your web app. This takes advantage of the search keywords that the Computer Vision service generates.
+There's much more you could do with Azure to develop your Intellipix app even further. For example, you could add support for authenticating users and deleting photos, and rather than forcing the user to wait for Cognitive Services to process a photo after an upload, you could use [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=academiccontent-github-cxa) to call the Computer Vision API asynchronously each time an image is added to blob storage. You could also perform any of the other Image Analysis operations on the image that are outlined in the overview.
> [!div class="nextstepaction"]
-> [Add search to your app](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise6)
+> [Image Analysis overview](../overview-image-analysis.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/overview.md
Azure's Computer Vision service gives you access to advanced algorithms that pro
| Service|Description| |||
-| <div class="nextstepaction"> [Optical Character Recognition (OCR)](overview-ocr.md)</div>|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
-|<div class="nextstepaction">[Image Analysis](overview-image-analysis.md)</div>| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library.md) to get started.|
-| <div class="nextstepaction">[Spatial Analysis](intro-to-spatial-analysis-public-preview.md)</div>| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|
+| [Optical Character Recognition (OCR)](overview-ocr.md)|The Optical Character Recognition (OCR) service extracts text from images. You can use the new Read API to extract printed and handwritten text from photos and documents. It uses deep-learning-based models and works with text on a variety of surfaces and backgrounds. These include business documents, invoices, receipts, posters, business cards, letters, and whiteboards. The OCR APIs support extracting printed text in [several languages](./language-support.md). Follow the [OCR quickstart](quickstarts-sdk/client-library.md) to get started.|
+|[Image Analysis](overview-image-analysis.md)| The Image Analysis service extracts many visual features from images, such as objects, faces, adult content, and auto-generated text descriptions. Follow the [Image Analysis quickstart](quickstarts-sdk/image-analysis-client-library.md) to get started.|
+| [Spatial Analysis](intro-to-spatial-analysis-public-preview.md)| The Spatial Analysis service analyzes the presence and movement of people on a video feed and produces events that other systems can respond to. Install the [Spatial Analysis container](spatial-analysis-container.md) to get started.|
## Computer Vision for digital asset management
cognitive-services Spatial Analysis Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/spatial-analysis-web-app.md
This app showcases the following scenarios:
## Deploy the Spatial Analysis container
-Fill out the [request application](https://aka.ms/csgate) to get access to run the container.
- Follow [the Host Computer Setup](./spatial-analysis-container.md) to configure the host computer and connect an IoT Edge device to Azure IoT Hub. ### Deploy an Azure IoT Hub service in your subscription
cognitive-services Speech Container Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-configuration.md
Previously updated : 08/31/2020 Last updated : 07/22/2021
Use bind mounts to read and write data to and from the container. You can specif
The Standard Speech containers don't use input or output mounts to store training or service data. However, custom speech containers rely on volume mounts.
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](speech-container-howto.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](speech-container-howto.md#host-computer-requirements-and-recommendations)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
| Optional | Name | Data type | Description | | -- | - | | -- |
cognitive-services Speech Container Howto On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md
Previously updated : 10/30/2020 Last updated : 07/22/2021
For more details on installing applications with Helm in Azure Kubernetes Servic
[ms-helm-hub-speech-chart]: https://hub.helm.sh/charts/microsoft/cognitive-services-speech-onpremise <!-- LINKS - internal -->
-[speech-container-host-computer]: speech-container-howto.md#the-host-computer
+[speech-container-host-computer]: speech-container-howto.md#host-computer-requirements-and-recommendations
[installing-helm-apps-in-aks]: ../../aks/kubernetes-helm.md [cog-svcs-containers]: ../cognitive-services-container-support.md
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/speech-container-howto.md
Previously updated : 03/02/2021 Last updated : 07/22/2021 keywords: on-premises, Docker, container
Speech containers enable customers to build a speech application architecture th
> [!IMPORTANT]
-> The following Speech containers are now Generally available:
-> * Standard Speech-to-text
-> * Custom Speech-to-text
-> * Standard Text-to-speech
-> * Neural Text-to-speech
->
-> The following speech containers are in gated preview.
-> * Speech Language Identification
->
> To use the speech containers, you must submit an online request and have it approved. For more information, see the **Request approval to run the container** section below.
-| Container | Features | Latest |
-|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.12.0 |
-| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.12.0 |
-| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.14.0 |
-| Speech Language Identification | Detect the language spoken in audio files. | 1.0 |
-| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.6.0 |
+| Container | Features | Latest | Release status |
+|--|--|--|--|
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 2.12.0 | Generally Available |
+| Custom Speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 2.12.0 | Generally Available |
+| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.14.0 | Generally Available |
+| Speech Language Identification | Detects the language spoken in audio files. | 1.0 | Gated preview |
+| Neural Text-to-speech | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | 1.6.0 | Generally Available |
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin. ## Prerequisites
-The following prerequisites before using Speech containers:
+You must meet the following prerequisites before using Speech service containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-| Required | Purpose |
-|--|--|
-| Docker Engine | You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br> |
-| Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands. |
-| Speech resource | In order to use these containers, you must have:<br><br>An Azure _Speech_ resource to get the associated API key and endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages. They are both required to start the container.<br><br>**{API_KEY}**: One of the two available resource keys on the **Keys** page<br><br>**{ENDPOINT_URI}**: The endpoint as provided on the **Overview** page |
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech service resource" target="_blank">Speech service resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
[!INCLUDE [Gathering required parameters](../containers/includes/container-gathering-required-parameters.md)]
-## The host computer
--
-### Advanced Vector Extension support
-The **host** is the computer that runs the docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
+## Host computer requirements and recommendations
-```console
-grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
-```
-> [!WARNING]
-> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
### Container requirements and recommendations
Core and memory correspond to the `--cpus` and `--memory` settings, which are us
> [!NOTE]
> The minimum and recommended values are based on Docker limits, *not* the host machine resources. For example, speech-to-text containers memory map portions of a large language model, and it is *recommended* that the entire file fit in memory, which is an additional 4-6 GB. Also, the first run of either container may take longer, because models are being paged into memory.
+### Advanced Vector Extension support
+
+The **host** is the computer that runs the docker container. The host *must support* [Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2) (AVX2). You can check for AVX2 support on Linux hosts with the following command:
+
+```console
+grep -q avx2 /proc/cpuinfo && echo AVX2 supported || echo No AVX2 support detected
+```
+> [!WARNING]
+> The host computer is *required* to support AVX2. The container *will not* function correctly without AVX2 support.
## Request approval to run the container

Fill out and submit the [request form](https://aka.ms/csgate) to request access to the container.
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/language-d
## How to use the container
-Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
+Once the container is on the [host computer](#host-computer-requirements-and-recommendations), use the following process to work with the container.
1. [Run the container](#run-the-container-with-docker-run), with the required billing settings. More [examples](speech-container-configuration.md#example-docker-run-commands) of the `docker run` command are available. 1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
cognitive-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/authentication.md
Previously updated : 11/22/2019 Last updated : 07/22/2021
The following video demonstrates using a Cognitive Services key.
## Authenticate with a multi-service subscription key
->[!WARNING]
-> At this time, these services **don't** support multi-service keys: QnA Maker, Speech Services, Custom Vision, and Anomaly Detector.
+> [!WARNING]
+> At this time, the multi-service key doesn't support the following services: QnA Maker, Immersive Reader, Personalizer, and Anomaly Detector.
This option also uses a subscription key to authenticate requests. The main difference is that a subscription key is not tied to a specific service; rather, a single key can be used to authenticate requests for multiple Cognitive Services. See [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) for information about regional availability, supported features, and pricing.
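For illustration, here's a hedged C# sketch of that pattern: the multi-service key is passed in the `Ocp-Apim-Subscription-Key` header of a Translator request. The endpoint mirrors the curl sample later in this article, while the key, region, and request body are placeholder assumptions.

```csharp
// Sketch only: authenticate a Cognitive Services request with a multi-service key.
// Replace the key and region with your own values; the endpoint and body are illustrative.
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class MultiServiceKeySketch
{
    static async Task Main()
    {
        const string subscriptionKey = "YOUR_MULTI_SERVICE_KEY";
        const string url = "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to=es";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Post, url);

        // The key goes in this header regardless of which Cognitive Service you call.
        request.Headers.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
        // Some services, such as Translator, also expect the resource's region.
        request.Headers.Add("Ocp-Apim-Subscription-Region", "westus2");

        request.Content = new StringContent("[{\"Text\":\"Hello, world!\"}]", Encoding.UTF8, "application/json");

        HttpResponseMessage response = await client.SendAsync(request);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

Only the endpoint and payload change from service to service; the header carrying the key stays the same.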
Multi-service authentication is supported in these regions:
- `westeurope` - `westus` - `westus2`
+- `francecentral`
+- `koreacentral`
+- `northcentralus`
+- `southafricanorth`
+- `uaenorth`
+- `switzerlandnorth`
+ ### Sample requests
curl -X POST 'https://api.cognitive.microsofttranslator.com/translate?api-versio
* [What is Cognitive Services?](./what-are-cognitive-services.md) * [Cognitive Services pricing](https://azure.microsoft.com/pricing/details/cognitive-services/)
-* [Custom subdomains](cognitive-services-custom-subdomains.md)
+* [Custom subdomains](cognitive-services-custom-subdomains.md)
cognitive-services Text Analytics How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/how-tos/text-analytics-how-to-install-containers.md
Previously updated : 06/02/2021 Last updated : 07/21/2021 keywords: on-premises, Docker, container, sentiment analysis, natural language processing # Install and run Text Analytics containers
+Containers enable you to run the Text Analytics APIs in your own environment and are great for your specific security and data governance requirements. The following Text Analytics containers are available:
+
+* sentiment analysis
+* language detection
+* key phrase extraction (preview)
+* Text Analytics for health
+ > [!NOTE]
-> * The container for Sentiment Analysis and language detection are now Generally Available. The key phrase extraction container is available as an ungated public preview.
> * Entity linking and NER are not currently available as a container. > * The container image locations may have recently changed. Read this article to see the updated location for this container.
+> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data Limits](../concepts/data-limits.md).
-Containers enable you to run the Text Analytic APIs in your own environment and are great for your specific security and data governance requirements. The Text Analytics containers provide advanced natural language processing over raw text, and include three main functions: sentiment analysis, key phrase extraction, and language detection.
+Containers enable you to run the Text Analytics APIs in your own environment and are great for your specific security and data governance requirements. The Text Analytics containers provide advanced natural language processing over raw text, and include three main functions: sentiment analysis, key phrase extraction, and language detection.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-> [!IMPORTANT]
-> The free account is limited to 5,000 text records per month and only the **Free** and **Standard** <a href="https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics" target="_blank">pricing tiers </a> are valid for containers. For more information on transaction request rates, see [Data Limits](../concepts/data-limits.md).
- ## Prerequisites
-To run any of the Text Analytics containers, you must have the host computer and container environments.
+You must meet the following prerequisites before using Text Analytics containers. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/cognitive-services/) before you begin.
-## Preparation
-
-You must meet the following prerequisites before using Text Analytics containers:
-
-|Required|Purpose|
-|--|--|
-|Docker Engine| You need the Docker Engine installed on a [host computer](#the-host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).<br><br> Docker must be configured to allow the containers to connect with and send billing data to Azure. <br><br> **On Windows**, Docker must also be configured to support Linux containers.<br><br>|
-|Familiarity with Docker | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` commands.|
-|Text Analytics resource |In order to use the container, you must have:<br><br>An Azure [Text Analytics resource](../../cognitive-services-apis-create-account.md) with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). You will need to get the associated API key and endpoint URI by navigating to your resource's **Key and endpoint** page in the Azure portal. <br><br>**{API_KEY}**: One of the two available resource keys. <br><br>**{ENDPOINT_URI}**: The endpoint for your resource. |
+* [Docker](https://docs.docker.com/) installed on a host computer. Docker must be configured to allow the containers to connect with and send billing data to Azure.
+ * On Windows, Docker must also be configured to support Linux containers.
+ * You should have a basic understanding of [Docker concepts](https://docs.docker.com/get-started/overview/).
+* A <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Text Analytics resource" target="_blank">Text Analytics resource </a> with the free (F0) or standard (S) [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
[!INCLUDE [Gathering required parameters](../../containers/includes/container-gathering-required-parameters.md)]
-If you're using the Text Analytics for health container, the [responsible AI](/legal/cognitive-services/text-analytics/transparency-note-health) (RAI) acknowledgment must also be present with a value of `accept`.
-
-## The host computer
+## Host computer requirements and recommendations
[!INCLUDE [Host Computer requirements](../../../../includes/cognitive-services-containers-host-computer.md)]
-### Container requirements and recommendations
-
-The following table describes the minimum and recommended specifications for the Text Analytics containers. At least 2 gigabytes (GB) of memory are required, and each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Section (TPS) are also listed.
+The following table describes the minimum and recommended specifications for the available Text Analytics containers. Each CPU core must be at least 2.6 gigahertz (GHz) or faster. The allowable Transactions Per Second (TPS) are also listed.
| | Minimum host specs | Recommended host specs | Minimum TPS | Maximum TPS| |||-|--|--|
-| **Language detection, key phrase extraction** | 1 core, 2GB memory | 1 core, 4GB memory |15 | 30|
+| **Language detection** | 1 core, 2GB memory | 1 core, 4GB memory |15 | 30|
+| **Key phrase extraction (preview)** | 1 core, 2GB memory | 1 core, 4GB memory |15 | 30|
| **Sentiment Analysis** | 1 core, 2GB memory | 4 cores, 8GB memory |15 | 30| | **Text Analytics for health - 1 document/request** | 4 core, 10GB memory | 6 core, 12GB memory |15 | 30| | **Text Analytics for health - 10 documents/request** | 6 core, 16GB memory | 8 core, 20GB memory |15 | 30|
CPU core and memory correspond to the `--cpus` and `--memory` settings, which ar
## Get the container image with `docker pull` -
-Container images for Text Analytics are available on the Microsoft Container Registry.
- # [Sentiment Analysis ](#tab/sentiment) [!INCLUDE [docker-pull-sentiment-analysis-container](../includes/docker-pull-sentiment-analysis-container.md)]
Container images for Text Analytics are available on the Microsoft Container Reg
***
-## How to use the container
-
-Once the container is on the [host computer](#the-host-computer), use the following process to work with the container.
-
-1. [Run the container](#run-the-container-with-docker-run), with the required billing settings.
-1. [Query the container's prediction endpoint](#query-the-containers-prediction-endpoint).
## Run the container with `docker run`
-Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
+Once the container is on the host computer, use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to run the containers. The container will continue to run until you stop it.
> [!IMPORTANT] > * The docker commands in the following sections use the back slash, `\`, as a line continuation character. Replace or remove this based on your host operating system's requirements. > * The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container; otherwise, the container won't start. For more information, see [Billing](#billing).
-> * The sentiment analysis and language detection containers are generally available. The key phrase extraction container uses v2 of the API, and is in preview.
+> * If you're using the Text Analytics for health container, the [responsible AI](/legal/cognitive-services/text-analytics/transparency-note-health) (RAI) acknowledgment must also be present with a value of `accept`.
+> * The sentiment analysis and language detection containers use v3 of the API, and are generally available. The key phrase extraction container uses v2 of the API, and is in preview.
# [Sentiment Analysis](#tab/sentiment)
In this article, you learned concepts and workflow for downloading, installing,
* *Key Phrase Extraction (preview)* * *Language Detection* * *Text Analytics for health*
-* Container images are downloaded from the Microsoft Container Registry (MCR)
+* Container images are downloaded from the Microsoft Container Registry (MCR).
* Container images run in Docker.
* You can use either the REST API or SDK to call operations in Text Analytics containers by specifying the host URI of the container, as shown in the sketch after this list.
* You must specify billing information when instantiating a container.
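To make the SDK point concrete, here's a minimal hedged C# sketch that points the Text Analytics client library at a container's host URI. The localhost port, key value, and sample text are assumptions, not values taken from this article; adjust the endpoint to match your `docker run` port mapping.

```csharp
// Sketch only: call a locally running Text Analytics container through the client library.
using System;
using Azure;
using Azure.AI.TextAnalytics;

class ContainerClientSketch
{
    static void Main()
    {
        // Assumption: the container is exposed locally on port 5000.
        var endpoint = new Uri("http://localhost:5000");
        // The key associated with your Text Analytics resource
        // (the client requires a credential even when calling a container).
        var credential = new AzureKeyCredential("YOUR_TEXT_ANALYTICS_KEY");

        var client = new TextAnalyticsClient(endpoint, credential);

        // Call an operation the language detection container supports.
        DetectedLanguage language = client.DetectLanguage("Ce document est rédigé en français.");
        Console.WriteLine($"{language.Name} ({language.ConfidenceScore})");
    }
}
```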
In this article, you learned concepts and workflow for downloading, installing,
## Next steps
-* Review [Configure containers](../text-analytics-resource-container-config.md) for configuration settings
-* Refer to [Frequently asked questions (FAQ)](../text-analytics-resource-faq.yml) to resolve issues related to functionality.
+* See [Configure containers](../text-analytics-resource-container-config.md) for configuration settings.
cognitive-services Text Analytics Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/text-analytics-resource-container-config.md
Previously updated : 04/01/2020 Last updated : 07/21/2021
Use bind mounts to read and write data to and from the container. You can specif
The Text Analytics containers don't use input or output mounts to store training or service data.
-The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](how-tos/text-analytics-how-to-install-containers.md#the-host-computer)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
+The exact syntax of the host mount location varies depending on the host operating system. Additionally, the [host computer](how-tos/text-analytics-how-to-install-containers.md#host-computer-requirements-and-recommendations)'s mount location may not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.
|Optional| Name | Data type | Description | |-||--|-|
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/teams-interop.md
Azure Communication Services supports two types of Teams interoperability depend
Applications can implement both authentication schemes and leave the choice of authentication up to the end user. ## Bring your own identity
-Bring your own identity (BYOI) is the most common and simplest model for using Azure Communication Services and Teams interoperability. You implement whatever authentication scheme you desire, your app can join Microsoft Teams meetings, and Teams will treat these users as anonymous external accounts.
+Bring your own identity (BYOI) is the common model for using Azure Communication Services and Teams interoperability. It supports any identity provider and authentication scheme. Your app can join Microsoft Teams meetings, and Teams will treat these users as anonymous external accounts. The name of Communication Services users displayed in Teams is configurable via the Communication Services Calling SDK.
-This capability is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application experience) into a meeting experience. For example:
+This capability is ideal for business-to-consumer applications that bring together employees (familiar with Teams) and external users (using a custom application experience) into a meeting experience. Meeting details that need to be shared with external users of your application can be retrieved via the Graph API or from the calendar in Microsoft Teams.
-1. Employees use Teams to schedule a meeting
-1. Meeting details are shared with external users through your custom application.
- * **Using Graph API** - Your custom application uses the Microsoft Graph APIs to access meeting details to be shared.
- * **Manual options** - For example, your meeting link can be copied from your calendar in Microsoft Teams.
-1. External users use your custom application to join the Teams meeting (via the Communication Services Calling and Chat SDKs)
-
-While certain Teams meeting features such as raised hand, together mode, and breakout rooms will only be available for Teams users, your custom application will have access to the meeting's core audio, video, chat, and screen sharing capabilities. Meeting chat will be accessible to your custom application user while they're in the call. They won't be able to send or receive messages before joining or after leaving the call. If the meeting is scheduled for a channel, Communication Services users will not be able to join the chat or send and receive messages.
-
-When a Communication Services user joins the Teams meeting, the display name provided through the Calling SDK will be shown to Teams users. The Communication Services user will otherwise be treated like an anonymous user in Teams.
+External users will be able to use core audio, video, screen sharing, and chat functionality via Azure Communication Services SDKs. Features such as raised hand, together mode, and breakout rooms will only be available for Teams users. Communication Services users can send and receive messages only while present in the Teams meeting and if the meeting is not scheduled for a channel.
Your custom application should consider user authentication and other security measures to protect Teams meetings. Be mindful of the security implications of enabling anonymous users to join meetings, and use the [Teams security guide](/microsoftteams/teams-security-guide#addressing-threats-to-teams-meetings) to configure capabilities available to anonymous users.
-Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a Web application.
+Additional information on required dataflows for joining Teams meetings is available at the [client and server architecture page](client-and-server-architecture.md). The [Group Calling Hero Sample](../samples/calling-hero-sample.md) provides example code for joining a Teams meeting from a web application.
## Microsoft 365 Teams identity
-Authenticating the end user's Microsoft 365 account and authorizing your application through Azure Active Directory allows for a deeper level of interoperability with Microsoft Teams. These applications can make calls and join meetings seamlessly on behalf of Microsoft 365 users. When interacting in a meeting or call, users of the native Teams app will observe your application's end users having the appropriate display name, profile picture, call history, and other Microsoft 365 attributes. Chat functionality is currently available via Graph API.
+The Azure Communication Services Calling SDK can be used with Microsoft 365 Teams identities to support Teams-like experiences for Teams interoperability. Microsoft 365 Teams identities are provided and authenticated by Azure Active Directory. Your app can make or accept calls with a regular Microsoft 365 identity. All attributes and details about the user are bound to the Azure Active Directory user.
+
+This identity model is ideal for use cases where a custom user interface is needed, where the Teams client is not available for your platform, or where the Teams client does not support a sufficient level of customization. For example, an application can be used to answer phone calls on behalf of the end user's Teams provisioned PSTN number and have a user interface optimized for a receptionist or call center business process.
+
+Calling and screen sharing functionality is available via the Communication Services Calling SDK. Calling management is available via the Graph API and through configuration in the Teams client or the Teams Admin Portal. Chat functionality is available via the Graph API.
-This identity model is ideal for augmenting a Teams deployment with a fully custom user experience. For example, an application can be used to answer phone calls on behalf of the end user's Teams provisioned PSTN number and have a user interface optimized for a receptionist or call center business process.
+Teams users are authenticated via the MSAL library against Azure Active Directory in the client application. Authentication tokens are exchanged for a Microsoft 365 Teams token via the Communication Services Identity SDK. We encourage you to implement the token exchange in your backend services, because exchange requests are signed with credentials for Azure Communication Services. In your backend services, you can require any additional authentication.
-Building an Azure Communication Services app using Microsoft 365 identities requires:
-1. Azure Communication Services resource in Azure
-2. Azure Active Directory application
-3. Application authorization from the end-user or an admin in Azure Active Directory
-4. Authentication of the end user's Microsoft 365 identity
+To learn more about the functionality, join our TAP program for early access by completing [this form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8MfnD7fOYZEompFbYDoD4JUMkdYT0xKUUJLR001ODdQRk1ITTdOMlRZNSQlQCN0PWcu).
-Authentication and authorization of the end-users are performed through [Microsoft Authentication Library flows (MSAL)](../../active-directory/develop/msal-overview.md). The following diagram summarizes integrating your calling experiences with authenticated Teams interoperability:
+## Comparison
-![Process to enable calling feature for custom Teams endpoint experience](./media/teams-identities/teams-identity-calling-overview.png)
+|Criteria|Bring your own identity| Microsoft 365 Teams identity|
+||||
+|Applicable| In business-to-consumer scenarios for consumer applications | In business-to-business or business-to-consumer scenarios on business applications |
+|Identity provider|Any|Azure Active Directory|
+|Authentication & authorization|Custom*| Azure Active Directory and custom*|
+|Calling available via | Communication Services Calling SDKs | Communication Services Calling SDKs |
+|Chat available via | Communication Services Chat SDKs | Graph API |
+|PSTN support| outbound voice call, outbound direct routing, [details](./telephony-sms/telephony-concept.md) | inbound call assigned to Teams identity, outbound call using calling plan|
+\* Server logic issuing access tokens can perform any custom authentication and authorization of the request.
## Privacy

Interoperability between Azure Communication Services and Microsoft Teams enables your applications and users to participate in Teams calls, meetings, and chat. It is your responsibility to ensure that the users of your application are notified when recording or transcription are enabled in a Teams call or meeting.
Azure Communication Services interoperability isn't compatible with Teams deploy
> [!div class="nextstepaction"] > [Join a BYOI calling app to a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
-> [Authenticate Microsoft 365 users](../quickstarts/manage-teams-identity.md)
+> [Authenticate Microsoft 365 users](../quickstarts/manage-teams-identity.md)
communication-services Meeting Interop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/chat/meeting-interop.md
Last updated 06/30/2021
-zone_pivot_groups: acs-web-ios
+zone_pivot_groups: acs-web-ios-android
Get started with Azure Communication Services by connecting your chat solution t
[!INCLUDE [Teams interop with iOS SDK](./includes/meeting-interop-swift.md)] ::: zone-end + ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
cosmos-db Configure Periodic Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/configure-periodic-backup-restore.md
description: This article describes how to configure Azure Cosmos DB accounts wi
Previously updated : 04/05/2021 Last updated : 07/21/2021
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
* The backups are taken without affecting the performance or availability of your application. Azure Cosmos DB performs data backup in the background without consuming any extra provisioned throughput (RUs) or affecting the performance and availability of your database. > [!Note]
-> Accounts with Synapse Link enabled are not supported.
+> For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
## <a id="backup-storage-redundancy"></a>Backup storage redundancy
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality is in public preview and it ha
* Multi-regions write accounts are not supported.
-* Accounts with Synapse Link enabled are not supported.
+* For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
* The restored account is created in the same region where your source account exists. You can't restore an account into a region where the source account did not exist.
cosmos-db Online Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/online-backup-and-restore.md
description: This article describes how automatic backup, on-demand data restore
Previously updated : 10/13/2020 Last updated : 07/21/2021
Azure Cosmos DB automatically takes backups of your data at regular intervals. T
> [!NOTE]
> If you configure a new account with continuous backup, you can do self-service restore via Azure portal, PowerShell, or CLI. If your account is configured in continuous mode, you can't switch it back to periodic mode. Currently existing accounts with periodic backup mode can't be changed into continuous mode.
+For Azure Synapse Link enabled accounts, analytical store data isn't included in the backups and restores. When Synapse Link is enabled, Azure Cosmos DB will continue to automatically take backups of your data in the transactional store at a scheduled backup interval. Automatic backup and restore of your data in the analytical store is not supported at this time.
+ ## Next steps Next you can learn about how to configure and manage periodic and continuous backup modes for your account:
databox-online Azure Stack Edge Gpu Troubleshoot Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox-online/azure-stack-edge-gpu-troubleshoot-blob-storage.md
Here are the errors related to Blob storage for an Azure Stack Edge device.
|The value for one of the HTTP headers is not in the correct format.|The installed version of the Microsoft Azure Storage Library for Python is not supported by Azure Stack Edge. For supported library versions, see [Supported Azure client libraries](azure-stack-edge-gpu-system-requirements-rest.md#supported-azure-client-libraries).| |… [SSL: CERTIFICATE_VERIFY_FAILED] …| Before running Python, set the REQUESTS_CA_BUNDLE environment variable to the path of the Base64-encoded SSL certificate file (see how to [Download the certificate](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates)). For example, run:<br>`export REQUESTS_CA_BUNDLE=/tmp/mycert.cer`<br>`python`<br>Alternately, add the certificate to the system's certificate store, and then set this environment variable to the path of that store. For example, on Ubuntu, run:<br>`export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt`<br>`python`| |The connection times out.|Sign in on your device, and then check whether it's unlocked. Anytime the device restarts, it stays locked until someone signs in.|-
+|Could not create or update storageaccount. Ensure that the access key for your storage account is valid. If needed, update the key on the device.|Sync the storage account keys. Follow the steps outlined [here](azure-stack-edge-gpu-manage-storage-accounts.md#sync-storage-keys).|
## Next steps
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/alert-engine-messages.md
Policy engine alerts describe detected deviations from learned baseline behavior
| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | New Asset Detected | A new source device was detected on the network but has not been authorized. | Major | | New LLDP Device Configuration | A new source device was detected on the network but has not been authorized. | Major |
-| New Port Discovery | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Warning |
| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination has not been authorized as learned traffic on your network. The following combination is unauthorized. | Major | | S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | | Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning |
event-grid Async Operations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/async-operations.md
Title: Status of Event Grid asynchronous operations description: Describes how to track Event Grid asynchronous operations in Azure. It shows the values you use to get the status of a long-running operation. Previously updated : 07/07/2020 Last updated : 07/2/2021
event-grid Auth0 How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/auth0-how-to.md
Title: How to send events from Auth0 to Azure using Azure Event Grid description: How to send events from Auth0 to Azure services with Azure Event Grid. Previously updated : 07/07/2020 Last updated : 07/22/2021 # Integrate Azure Event Grid with Auth0
event-grid Auth0 Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/auth0-overview.md
Title: Auth0 partner topics with Azure Event Grid description: Send events from Auth0 to Azure services with Azure Event Grid. Previously updated : 07/07/2020 Last updated : 07/22/2021 # Auth0 partner topics
event-grid Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/cli-samples.md
Title: Azure CLI samples - Event Grid | Microsoft Docs description: This article provides a table that includes links to Azure command-line interface (CLI) script samples for Event Grid. Previously updated : 07/07/2020 Last updated : 07/22/2021
event-grid Cloud Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/cloud-event-schema.md
Title: CloudEvents v1.0 schema with Azure Event Grid description: Describes how to use the CloudEvents v1.0 schema for events in Azure Event Grid. The service supports events in the JSON implementation of Cloud Events. Previously updated : 07/07/2020 Last updated : 07/22/2021 # CloudEvents v1.0 schema with Azure Event Grid
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/cloudevents-schema.md
Title: Use Azure Event Grid with events in CloudEvents schema description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents. Previously updated : 11/10/2020 Last updated : 07/22/2021
event-grid Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/compare-messaging-services.md
Title: Compare Azure messaging services description: Describes the three Azure messaging services - Azure Event Grid, Event Hubs, and Service Bus. Recommends which service to use for different scenarios. Previously updated : 07/01/2021 Last updated : 07/22/2021 # Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
event-grid Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/configure-private-endpoints.md
Title: Configure private endpoints for Azure Event Grid topics or domains description: This article describes how to configure private endpoints for Azure Event Grid topics or domain. Previously updated : 11/18/2020 Last updated : 07/22/2021
event-grid Create View Manage System Topics Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/create-view-manage-system-topics-arm.md
Title: Use Azure Resource Manager templates to create system topics in Azure Event Grid description: This article shows how to use Azure Resource Manager templates to create system topics in Azure Event Grid. Previously updated : 07/07/2020 Last updated : 07/22/2021 # Create system topics in Azure Event Grid using Resource Manager templates
event-grid Create View Manage System Topics Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/create-view-manage-system-topics-cli.md
Title: Create, view, and manage Azure Event Grid system topics using CLI description: This article shows how to use Azure CLI to create, view, and delete system topics. Previously updated : 07/07/2020 Last updated : 07/22/2021 # Create, view, and manage Event Grid system topics using Azure CLI
event-grid Event Grid Cli Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-azure-subscription.md
Title: Azure CLI script sample - Subscribe to Azure subscription | Microsoft Doc
description: This article provides a sample Azure CLI script that shows how to subscribe to Azure Event Grid events using Azure CLI. ms.devlang: azurecli Previously updated : 07/08/2020 Last updated : 07/22/2021
event-grid Event Grid Cli Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-blob.md
Title: Azure CLI script sample - Subscribe to Blob storage account | Microsoft D
description: This article provides a sample Azure CLI script that shows how to subscribe to events for an Azure Blob Storage account. ms.devlang: azurecli Previously updated : 07/08/2020 Last updated : 07/22/2021
event-grid Event Grid Cli Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-create-custom-topic.md
Title: Azure CLI script sample - Create custom topic | Microsoft Docs
description: This article provides a sample Azure CLI script that shows how to create an Azure Event Grid custom topic. ms.devlang: azurecli Previously updated : 07/08/2020 Last updated : 07/22/2021
event-grid Event Grid Cli Resource Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-resource-group-filter.md
Title: Azure CLI - subscribe to resource group & filter by resource
description: This article provides a sample Azure CLI script that shows how to subscribe to Event Grid events for a resource and filter for a resource. ms.devlang: azurecli Previously updated : 07/08/2020 Last updated : 07/22/2021
event-grid Event Grid Cli Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-resource-group.md
Title: Azure CLI script sample - Subscribe to resource group | Microsoft Docs
description: This article provides a sample Azure CLI script that shows how to subscribe to Azure Event Grid events for a resource group. ms.devlang: azurecli Previously updated : 07/08/2020 Last updated : 07/22/2021
event-grid Event Grid Cli Subscribe Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md
Title: Azure CLI script sample - Subscribe to custom topic | Microsoft Docs
description: This article provides a sample Azure CLI script that shows how to subscribe to Event Grid events for a custom topic. ms.devlang: azurecli Previously updated : 07/08/2020 Last updated : 07/22/2021
event-grid Event Grid Powershell Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-powershell-azure-subscription.md
Title: Azure PowerShell - subscribe to Azure subscription
description: This article provides a sample Azure PowerShell script that shows how to subscribe to Event Grid events for an Azure subscription. ms.devlang: powershell Previously updated : 07/08/2020 Last updated : 07/22/2021 # Subscribe to events for an Azure subscription with PowerShell
event-grid Event Grid Powershell Blob https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-powershell-blob.md
Title: Azure PowerShell - subscribe to Blob storage account
description: This article provides a sample Azure PowerShell script that shows how to subscribe to Event Grid events for a Blob Storage account. ms.devlang: powershell Previously updated : 07/08/2020 Last updated : 07/22/2021 # Subscribe to events for a Blob storage account with PowerShell
event-grid Event Grid Powershell Create Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/scripts/event-grid-powershell-create-custom-topic.md
Title: Azure PowerShell script sample - Create custom topic | Microsoft Docs
description: This article provides a sample Azure PowerShell script that shows how to create an Event Grid custom topic. documentationcenter: na- - ms.devlang: powershell na Previously updated : 01/23/2020- Last updated : 07/22/2021 # Create Event Grid custom topic with PowerShell
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/concepts-device-templates.md
A solution builder adds device templates to an IoT Central application. A device
A device template includes the following sections: - _A device model_. This part of the device template defines how the device interacts with your application. A device developer implements the behaviors defined in the model.
- - _Default component_. Every device model has a default component. The default component's interface describes capabilities that are specific to the device model.
- - _Components_. A device model may include components in addition to the default component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example several phone device models could use the same camera interface.
- - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the default component.
+ - _Root component_. Every device model has a root component. The root component's interface describes capabilities that are specific to the device model.
+ - _Components_. A device model may include components in addition to the root component to describe device capabilities. Each component has an interface that describes the component's capabilities. Component interfaces may be reused in other device models. For example, several phone device models could use the same camera interface.
+ - _Inherited interfaces_. A device model contains one or more interfaces that extend the capabilities of the root component.
- _Cloud properties_. This part of the device template lets the solution developer specify any device metadata to store. Cloud properties are never synchronized with devices and only exist in the application. Cloud properties don't affect the code that a device developer writes to implement the device model. - _Customizations_. This part of the device template lets the solution developer override some of the definitions in the device model. Customizations are useful if the solution developer wants to refine how the application handles a value, such as changing the display name for a property or the color used to display a telemetry value. Customizations don't affect the code that a device developer writes to implement the device model. - _Views_. This part of the device template lets the solution developer define visualizations to view data from the device, and forms to manage and control a device. The views use the device model, cloud properties, and customizations. Views don't affect the code that a device developer writes to implement the device model.
A typical IoT device is made up of:
These parts are called _interfaces_ in a device model. Interfaces define the details of each part your device implements. Interfaces are reusable across device models. In DTDL, a component refers to another interface, which may be defined in a separate DTDL file or in a separate section of the file.
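As a minimal sketch of how a component refers to another interface, the following DTDL fragment declares a root interface with one telemetry value and one component. The model IDs and names here are illustrative and simplified, not taken from the published samples:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:TemperatureController;1",
  "@type": "Interface",
  "displayName": "Temperature Controller",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "workingSet",
      "schema": "double"
    },
    {
      "@type": "Component",
      "name": "thermostat1",
      "schema": "dtmi:com:example:Thermostat;1"
    }
  ]
}
```

In this sketch, `thermostat1` is the component's instance name, and its `schema` value is the ID of the interface that defines the thermostat's own capabilities.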
-The following example shows the outline of device model for a [temperature controller device](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json). The default component includes definitions for `workingSet`, `serialNumber`, and `reboot`. The device model also includes two `thermostat` components and a `deviceInformation` component. The contents of the three components have been removed for the sake of brevity:
+The following example shows the outline of a device model for a [temperature controller device](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json). The root component includes definitions for `workingSet`, `serialNumber`, and `reboot`. The device model also includes two `thermostat` components and a `deviceInformation` component. The contents of the three components have been removed for the sake of brevity:
```json [
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
The following screenshot shows the **Relationships** page for an IoT Edge gatewa
:::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/device-template-relationship.png" alt-text="Screenshot showing IoT Edge gateway device template relationship with a thermostat downstream device template.":::
-The previous screenshot shows an IoT Edge gateway device template with no modules defined. A transparent gateway doesn't require any modules because the IoT Edge runtime forwards messages from the downstream devices to IoT Central. If the gateway itself needs to send telemetry, synchronize properties, or handle commands, you can define these capabilities in the default component or in a module.
+The previous screenshot shows an IoT Edge gateway device template with no modules defined. A transparent gateway doesn't require any modules because the IoT Edge runtime forwards messages from the downstream devices to IoT Central. If the gateway itself needs to send telemetry, synchronize properties, or handle commands, you can define these capabilities in the root component or in a module.
Add any required cloud properties and views before you publish the gateway and downstream device templates.
iot-central Howto Control Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-control-devices-with-rest-api.md
For the reference documentation for the IoT Central REST API, see [Azure IoT Cen
Components let you group and reuse device capabilities. To learn more about components and device models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md).
-Not all device templates use components. The following screenshot shows the device template for a simple [thermostat](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-2.json) where all the capabilities are defined in a single interface called the **Default component**:
+Not all device templates use components. The following screenshot shows the device template for a simple [thermostat](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/thermostat-2.json) where all the capabilities are defined in a single interface called the **Root component**:
:::image type="content" source="media/howto-control-devices-with-rest-api/thermostat-device.png" alt-text="Screenshot that shows a simple no component thermostat device.":::
iot-central Howto Export Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data.md
Each exported message contains a normalized form of the full message the device
- `enqueuedTime`: The time at which this message was received by IoT Central. - `enrichments`: Any enrichments set up on the export. - `module`: The IoT Edge module that sent this message. This field only appears if the message came from an IoT Edge module.-- `component`: The component that sent this message. This field only appears if the capabilities sent in the message were modeled as a [component in the device template](howto-set-up-template.md#create-a-component).
+- `component`: The component that sent this message. This field only appears if the capabilities sent in the message were modeled as a component in the device template.
- `messageProperties`: Additional properties that the device sent with the message. These properties are sometimes referred to as *application properties*. [Learn more from IoT Hub docs](../../iot-hub/iot-hub-devguide-messages-construct.md). For Event Hubs and Service Bus, IoT Central exports a new message quickly after it receives the message from a device. In the user properties (also referred to as application properties) of each message, the `iotcentral-device-id`, `iotcentral-application-id`, and `iotcentral-message-source` are included automatically.
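The following sketch shows roughly what a normalized export message for a telemetry message might look like. All values, and the `telemetry` payload itself, are illustrative assumptions; only the metadata field names come from the list above:

```json
{
  "telemetry": {
    "temperature": 23.5
  },
  "enqueuedTime": "2021-07-22T10:11:12.345Z",
  "enrichments": {
    "site": "building-1"
  },
  "component": "thermostat1",
  "messageProperties": {
    "messageType": "status"
  }
}
```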
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices-in-bulk.md
To connect large number of devices to your application, you can bulk import devi
| Column | Description | | - | - |
-| IOTC_DEVICEID | The device ID is a unique identified this device will use to connect. The device ID can contain letters, numbers, and the `-` character without any spaces. |
-| IOTC_DEVICENAME | Optional. The device name is a friendly name that will be displayed throughout the application. If not specified, the same as the device ID. |
+| IOTC_DEVICEID | The device ID is a unique identifier this device will use to connect. The device ID can contain letters, numbers, and the `-` character without any spaces. The maximum length is 128 characters. |
+| IOTC_DEVICENAME | Optional. The device name is a friendly name that's displayed throughout the application. If not specified, the device ID is used. The maximum length is 148 characters. |
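For example, a minimal import file that uses these two columns might look like the following. The device IDs and names are illustrative:

```csv
IOTC_DEVICEID,IOTC_DEVICENAME
building1-thermostat-001,Building 1 thermostat
building2-thermostat-001,Building 2 thermostat
```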
To bulk-register devices in your application:
To bulk export devices from your application:
* IOTC_X509THUMBPRINT_PRIMARY * IOTC_X509THUMBPRINT_SECONDARY
-For more information about connection strings and connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
+For more information about connecting real devices to your IoT Central application, see [Device connectivity in Azure IoT Central](concepts-get-connected.md).
## Next steps
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-devices-individually.md
To add a device to your Azure IoT Central application:
1. Choose + **New**.
+1. Enter a device name and ID or accept the default. The maximum length of a device name is 148 characters. The maximum length of a device ID is 128 characters.
+ 1. Turn the **Simulated** toggle to **On** or **Off**. A real device is for a physical device that you connect to your Azure IoT Central application. A simulated device has sample data generated for you by Azure IoT Central. 1. Select **Create**.
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-set-up-template.md
A device template is a blueprint that defines the characteristics and behaviors of a type of device that connects to an [Azure IoT Central application](concepts-app-templates.md).
-For example, a builder can create a device template for a connected fan that has the following characteristics:
+This article describes how to create a device template in IoT Central. For example, you can create a device template for a sensor that sends telemetry, such as temperature and humidity, and properties, such as location. From this device template, an operator can create and connect real devices.
-- Sends temperature telemetry-- Sends location property-- Sends fan motor error events-- Sends fan operating state-- Provides a writable fan speed property-- Provides a command to restart the device-- Gives you an overall view of the device using a view
+The following screenshot shows an example of a device template:
-From this device template, an operator can create and connect real fan devices. All these fans have measurements, properties, and commands that operators use to monitor and manage them. Operators use the [device views](#add-views) and forms to interact with the fan devices. A device developer uses the template to understand how the device interacts with the application. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
-> [!NOTE]
-> Only builders and administrators can create, edit, and delete device templates. Any user can create devices on the **Devices** page from existing device templates.
-
-In an IoT Central application, a device template uses a device model to describe the capabilities of a device. As a builder, you have several options for creating device templates:
--- Design the device template in IoT Central, and then [implement its device model in your device code](concepts-telemetry-properties-commands.md).-- Import a device template from the [Azure Certified for IoT device catalog](https://aka.ms/iotdevcat). Customize the device template to your requirements in IoT Central.
-> [!NOTE]
-> IoT Central requires the full model with all the referenced interfaces in the same file, when you import a model from the model repository use the keyword "expanded" to get the full version.
-For example. https://devicemodels.azure.com/dtmi/com/example/thermostat-1.expanded.json
--- Author a device model using the [Digital Twins Definition Language (DTDL) - version 2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Visual Studio code has an extension that supports authoring DTDL models. To learn more, see [Lifecycle and tools](../../iot-develop/concepts-modeling-guide.md#lifecycle-and-tools). Then publish the model to the public model repository. To learn more, see [Device model repository](../../iot-develop/concepts-model-repository.md). Implement your device code from the model, and connect your real device to your IoT Central application. IoT Central finds and imports the device model from the public repository for you and generates a device template. You can then add any cloud properties, customizations, and views your IoT Central application needs to the device template.-- Author a device model using the DTDL. Implement your device code from the model. Manually import the device model into your IoT Central application, and then add any cloud properties, customizations, and views your IoT Central application needs.
+The device template has the following sections:
-> [!TIP]
-> IoT Central requires the full model with all the referenced interfaces in the same file. When you import a model from the model repository use the keyword *expanded* to get the full version.
-> For example, [https://devicemodels.azure.com/dtmi/com/example/thermostat-1.expanded.json](https://devicemodels.azure.com/dtmi/com/example/thermostat-1.expanded.json).
+- Model - Use the model to define how your device interacts with your IoT Central application. Each model has a unique model ID and defines the capabilities of the device. Capabilities are grouped into interfaces that let you reuse components across models or use inheritance to extend the set of capabilities.
+- Cloud properties - Use cloud properties to define information that your IoT Central application stores about your devices. For example, a cloud property might record the date a device was last serviced.
+- Customize - Use customizations to modify capabilities. For example, specify the minimum and maximum temperature values for a property.
+- Views - Use views to visualize the data from the device, and forms to manage and control a device.
-You can also add device templates to an IoT Central application using the [REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) or the [CLI](howto-manage-iot-central-from-cli.md).
+To learn more, see [What are device templates?](concepts-device-templates.md).
-Some [application templates](concepts-app-templates.md) already include device templates that are useful in the scenario the application template supports. For example, see [In-store analytics architecture](../retail/store-analytics-architecture.md).
+## Create a device template
-## Create a device template from the device catalog
+You have several options for creating device templates:
-As a builder, you can quickly start building out your solution by using a certified device. See the list in the [Azure IoT Device Catalog](https://devicecatalog.azure.com). IoT Central integrates with the device catalog so you can import a device model from any of the certified devices. To create a device template from one of these devices in IoT Central:
+- Design the device template in the IoT Central GUI.
+- Import a device template from the [Azure Certified for IoT device catalog](https://aka.ms/iotdevcat). Optionally, customize the device template to your requirements in IoT Central.
+- When the device connects to IoT Central, have it send the model ID of the model it implements. IoT Central uses the model ID to retrieve the model from the model repository and to create a device template. Add any cloud properties, customizations, and views your IoT Central application needs to the device template.
+- Author a device model using the [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md). Manually import the device model into your IoT Central application, and then add any cloud properties, customizations, and views your IoT Central application needs. A minimal example appears after this list.
+- You can also add device templates to an IoT Central application using the [REST API](/learn/modules/manage-iot-central-apps-with-rest-api/) or the [CLI](howto-manage-iot-central-from-cli.md).
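The following is a minimal sketch of a DTDL V2 model that you could author and import this way. The model ID, capability names, and schemas are hypothetical and only illustrate the overall shape of a model:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:EnvironmentSensor;1",
  "@type": "Interface",
  "displayName": "Environment Sensor",
  "contents": [
    {
      "@type": "Telemetry",
      "name": "humidity",
      "schema": "double"
    },
    {
      "@type": "Property",
      "name": "location",
      "schema": "string",
      "writable": true
    }
  ]
}
```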
-1. Go to the **Device templates** page in your IoT Central application.
-1. Select **+ New**, and then select any of the certified devices from the catalog. IoT Central creates a device template based on this device model.
-1. Add any cloud properties, customizations, or views to your device template.
-1. Select **Publish** to make the template available for operators to view and connect devices.
-
-## Create a device template from scratch
-
-A device template contains:
+> [!NOTE]
+> In each case, the device code must implement the capabilities defined in the model. The device code implementation isn't affected by the cloud properties, customizations and views sections of the device template.
-- A _device model_ that specifies the telemetry, properties, and commands that the device implements. These capabilities are organized into one or more components.-- _Cloud properties_ that define information that your IoT Central application stores about your devices. For example, a cloud property might record the date a device was last serviced. This information is never shared with the device.-- _Customizations_ let the builder override some of the definitions in the device model. For example, the builder can override the name of a device property. Property names appear in IoT Central views and forms.-- _Views and forms_ let the builder create a UI that lets operators monitor and manage the devices connected to your application.
+This section shows you how to import a device template from the catalog and how to customize it using the IoT Central GUI. This example uses the **ESP32-Azure IoT Kit** device template from the device catalog:
-To create a device template in IoT Central:
+1. To add a new device template, select **+ New** on the **Device templates** page.
+1. On the **Select type** page, scroll down until you find the **ESP32-Azure IoT Kit** tile in the **Use a pre-configured device template** section.
+1. Select the **ESP32-Azure IoT Kit** tile, and then select **Next: Review**.
+1. On the **Review** page, select **Create**.
+The name of the template you created is **Sensor Controller**. The model includes components such as **Sensor Controller**, **SensorTemp**, and **Device Information interface**. Components define the capabilities of an ESP32 device. Capabilities include telemetry, properties, and commands.
-1. Go to the **Device Templates** page in your IoT Central application.
-1. Select **+ New** > **IoT device**. Then select **Next: Customize**.
-1. Enter a name for your template, such as **Thermostat**. Then select **Next: Review** and then select **Create**.
-1. IoT Central creates an empty device template and lets you choose to create a custom model from scratch or import a DTDL model.
## Manage a device template
You can rename or delete a template from the template's editor page.
After you've defined the template, you can publish it. Until the template is published, you can't connect a device to it, and it doesn't appear on the **Devices** page.
-To learn more about modifying device templates, see [Edit an existing device template](howto-edit-device-template.md).
+To learn more about modifying and versioning device templates, see [Edit an existing device template](howto-edit-device-template.md).
+
+## Models
-## Create a capability model
+The model defines how your device interacts with your IoT Central application. Customize your model with additional capabilities, add interfaces to inherit capabilities, or add new components that are based on other interfaces.
To create a device model, you can:
To create a device model, you can:
- Import a DTDL model from a JSON file. A device builder might have used Visual Studio Code to author a device model for your application. - Select one of the devices from the Device Catalog. This option imports the device model that the manufacturer has published for this device. A device model imported like this is automatically published.
-## Manage a capability model
+1. To view the model ID, select the root interface in the model and select **Edit identity**:
-After you create a device model, you can:
+ :::image type="content" source="media/howto-set-up-template/view-id.png" alt-text="Screenshot that shows model id for device template root interface.":::
-- Add components to the model. A model must have at least one component.-- Edit model metadata, such as its ID, namespace, and name.-- Delete the model.
+1. To view the component ID, select **Edit Identity** on any of the component interfaces in the model.
-## Create a component
+To learn more, see the [IoT Plug and Play modeling guide](../../iot-pnp/concepts-modeling-guide.md).
-A device model must have at least one default component. A component is a reusable collection of capabilities.
+### Interfaces and components
-To create a component:
+To view and manage the interfaces in your device model:
-1. Go to your device model, and choose **+ Add component**.
+1. Go to the **Device Templates** page and select the device template you created. The interfaces are listed in the **Models** section of the device template. The following screenshot shows an example of the **Sensor Controller** root interface in a device template:
-1. On the **Add a component interface** page, you can:
+ :::image type="content" source="media/howto-set-up-template/device-template.png" alt-text="Screenshot that shows root interface for a model":::
- - Create a custom component from scratch.
- - Import an existing component from a DTDL file. A device builder might have used Visual Studio Code to author a component interface for your device.
+1. Select the ellipsis to add an inherited interface or component to the root interface. To learn more about interfaces and components, see [multiple components](../../iot-pnp/concepts-modeling-guide.md#multiple-components) in the modeling guide.
-1. After you create a component, choose **Edit identity** to change the display name of the component.
    :::image type="content" source="media/howto-set-up-template/add-interface.png" alt-text="Screenshot that shows how to add an interface or component.":::
-1. If you choose to create a custom component from scratch, you can add your device's capabilities. Device capabilities are telemetry, properties, and commands.
+1. To export a model or an interface, select **Export**.
-### Telemetry
+1. To view or edit the DTDL for an interface or a capability, select **Edit DTDL**.
-Telemetry is a stream of values sent from the device, typically from a sensor. For example, a sensor might report the ambient temperature.
+### Capabilities
+
Select **+ Add capability** to add a capability to an interface or component. For example, you can add a **Target Temperature** capability to a **SensorTemp** component.
++
+#### Telemetry
+
+Telemetry is a stream of values sent from the device, typically from a sensor. For example, a sensor might report the ambient temperature as shown below:
+ The following table shows the configuration settings for a telemetry capability:
The following table shows the configuration settings for a telemetry capability:
| Comment | Any comments about the telemetry capability. | | Description | A description of the telemetry capability. |
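In DTDL, a telemetry capability like the ambient temperature example above might be declared as shown in the following sketch. The name, display name, and semantic type are illustrative:

```json
{
  "@type": ["Telemetry", "Temperature"],
  "name": "temperature",
  "displayName": "Temperature",
  "schema": "double",
  "unit": "degreeCelsius"
}
```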
-### Properties
+#### Properties
-Properties represent point-in-time values. For example, a device can use a property to report the target temperature it's trying to reach. You can set writable properties from IoT Central.
+Properties represent point-in-time values. You can set writable properties from IoT Central.
+For example, a device can use a writable property to let an operator set the target temperature as shown below:
+ The following table shows the configuration settings for a property capability:
The following table shows the configuration settings for a property capability:
| Comment | Any comments about the property capability. | | Description | A description of the property capability. |
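In DTDL, a writable property such as the target temperature example above might be declared as shown in the following sketch. The name, display name, and semantic type are illustrative:

```json
{
  "@type": ["Property", "Temperature"],
  "name": "targetTemperature",
  "displayName": "Target Temperature",
  "schema": "double",
  "unit": "degreeCelsius",
  "writable": true
}
```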
-### Commands
+#### Commands
+
+You can call device commands from IoT Central. Commands optionally pass parameters to the device and receive a response from the device. For example, you can call a command to reboot a device in 10 seconds as shown below:
-You can call device commands from IoT Central. Commands optionally pass parameters to the device and receive a response from the device. For example, you can call a command to reboot a device in 10 seconds.
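In DTDL, a reboot command that takes a delay parameter, like the example above, might be declared as shown in the following sketch. The command and parameter names are illustrative:

```json
{
  "@type": "Command",
  "name": "reboot",
  "displayName": "Reboot",
  "request": {
    "name": "delay",
    "displayName": "Delay",
    "description": "Number of seconds to wait before rebooting the device.",
    "schema": "integer"
  }
}
```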
The following table shows the configuration settings for a command capability:
To learn more about how devices implement commands, see [Telemetry, property, an
You can choose to queue commands if a device is currently offline by enabling the **Queue if offline** option for a command in the device template. + This option uses IoT Hub cloud-to-device messages to send notifications to devices. To learn more, see the IoT Hub article [Send cloud-to-device messages](../../iot-hub/iot-hub-devguide-messages-c2d.md). Cloud-to-device messages:
Cloud-to-device messages:
> [!NOTE] > This option is only available in the IoT Central web UI. This setting isn't included if you export a model or component from the device template.
-## Manage a component
-
-Use components to assemble a device template from other interfaces. For example, the device template for a temperature controller could include several thermostat components. Components can be edited directly in the device template or exported and imported as JSON files. Devices can interact with component instances. For example, a device with two thermostats can send telemetry from each thermostat to separate components in your IoT Central application.
-
-## Inheritance
-
-You can extend an interface using inheritance. Use inheritance to add capabilities to existing interfaces. Inherited interfaces are transparent to devices.
-
-## Add cloud properties
+## Cloud properties
Use cloud properties to store information about devices in IoT Central. Cloud properties are never sent to a device. For example, you can use cloud properties to store the name of the customer who has installed the device, or the device's last service date. + The following table shows the configuration settings for a cloud property: | Field | Description |
The following table shows the configuration settings for a cloud property:
| Semantic Type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. | | Schema | The cloud property data type, such as double, string, or vector. The available choices are determined by the semantic type. |
-## Add customizations
+## Customizations
+
+Use customizations when you need to modify an imported component or add IoT Central-specific features to a capability. For example, you can change the display name and units of a property as shown below:
++
+The following table shows the configuration settings for customizations:
+
+| Field | Description |
+| -- | -- |
+|Display name | Override display name from model. |
+|Semantic type | Override semantic type from model. |
+|Unit | Override unit from model. |
+|Display unit | Override from model. |
+|Comment | Override from model. |
+|Description | Override from model. |
+|Color | IoT Central specific option. |
+|Min value | Set minimum value - IoT Central specific option. |
+|Max value | Set maximum value - IoT Central specific option. |
+|Decimal places | IoT Central specific option. |
+|Initial value | Commands only - IoT Central specific option that sets the default parameter value. |
+
+## Views
-Use customizations when you need to modify an imported component or add IoT Central-specific features to a capability. You can customize any part of an existing device template's capabilities.
+Use views and forms to let an operator monitor and interact with a device. Views use visualizations such as charts to show telemetry and property values.
-### Generate default views
+Generating default views is a quick way to visualize your important device information. The three default views are:
-Generating default views is a quick way to visualize your important device information. You have up to three default views generated for your device template:
+### Default views
- **Commands**: A view with device commands, and allows your operator to dispatch them to your device. - **Overview**: A view with device telemetry, displaying charts and metrics. - **About**: A view with device information, displaying device properties.
-After you've selected **Generate default views**, you see that they've been automatically added under the **Views** section of your device template.
+After you've selected **Generate default views**, they're automatically added under the **Views** section of your device template.
-## Add views
+### Custom views
-Add views to a device template to enable operators to visualize a device by using charts and metrics. You can have multiple views for a device template.
+Add views to a device template to enable operators to visualize a device by using charts and metrics. You can add your own custom views to a device template.
To add a view to a device template: 1. Go to your device template, and select **Views**.
-1. Choose **Visualizing the Device**.
+1. Select **Visualizing the Device**.
1. Enter a name for your view in **View name**.
-1. Add tiles to your view from the list of static, property, cloud property, telemetry, and command tiles. Drag and drop the tiles you want to add to your view.
-1. To plot multiple telemetry values on a single chart tile, select the telemetry values, and then select **Combine**.
-1. Configure each tile you add to customize how it displays data. Access this option by selecting the gear icon, or by selecting **Change configuration** on your chart tile.
-1. Arrange and resize the tiles on your view.
-1. Save the changes.
+1. Select **Start with a visual** under **Add tiles** and choose the type of visual for your tile. Then either select **Add tile** or drag and drop the visual onto the canvas. To configure the tile, select the gear icon.
+
-### Configure preview device to view
-To view and test your view, select **Configure preview device**. This feature lets you see the view as your operator sees it after it's published. Use this feature to validate that your views show the correct data. You can choose from the following options:
+To test your view, select **Configure preview device**. This feature lets you see the view as an operator sees it after it's published. Use this feature to validate that your views show the correct data. Choose from the following options:
- No preview device. - The real test device you've configured for your device template. - An existing device in your application, by using the device ID.
-## Add forms
+### Forms
Add forms to a device template to enable operators to manage a device by viewing and setting properties. Operators can only edit cloud properties and writable device properties. You can have multiple forms for a device template.
-To add a form to a device template:
+1. Select the **Views** node, and then select the **Editing device and cloud data** tile to add a new view.
+
+1. Change the form name to **Manage device**.
+
+1. Select the **Customer Name** and **Last Service Date** cloud properties, and the **Target Temperature** property. Then select **Add section**.
+
+1. Select **Save** to save your new form.
+
-1. Go to your device template, and select **Views**.
-1. Choose **Editing Device and Cloud data**.
-1. Enter a name for your form in **Form Name**.
-1. Select the number of columns to use to lay out your form.
-1. Add properties to an existing section on your form, or select properties and choose **Add Section**. Use sections to group properties on your form. You can add a title to a section.
-1. Configure each property on the form to customize its behavior.
-1. Arrange the properties on your form.
-1. Save the changes.
## Publish a device template Before you can connect a device that implements your device model, you must publish your device template.
-To learn more about modifying a device template it's published, see [Edit an existing device template](howto-edit-device-template.md).
- To publish a device template, go to your device template, and select **Publish**. After you publish a device template, an operator can go to the **Devices** page, and add either real or simulated devices that use your device template. You can continue to modify and save your device template as you're making changes. When you want to push these changes out to the operator to view under the **Devices** page, you must select **Publish** each time.
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-developer.md
A device model is defined using the [DTDL](https://github.com/Azure/opendigitalt
A DTDL model can be a _no-component_ or a _multi-component_ model: -- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined a single _default component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.-- Multi-component model. A more complex model that includes two or more components. These components include a single default component, and one or more additional nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
+- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined in a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.
+- Multi-component model. A more complex model that includes two or more components. These components include a single root component, and one or more additional nested components. For an example, see the [Temperature Controller](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/TemperatureController.json) model.
To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md)
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-solution-builder.md
An IoT Central application can have one or more dashboards that operators use to
- To view some examples of customized dashboards, see [Industry focused templates](concepts-app-templates.md#industry-focused-templates). - To learn more about dashboards, see [Create and manage multiple dashboards](howto-manage-dashboards.md) and [Configure the application dashboard](howto-manage-dashboards.md).
-When a device connects to an IoT Central, the device is associated with a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. As a solution developer, you can create and customize the available views for a device type. To learn more, see [Add views](howto-set-up-template.md#add-views).
+When a device connects to an IoT Central, the device is associated with a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. As a solution developer, you can create and customize the available views for a device type. To learn more, see [Add views](howto-set-up-template.md#views).
## Use built-in rules and analytics
iot-edge How To Auto Provision Symmetric Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-auto-provision-symmetric-keys.md
Previously updated : 03/01/2021 Last updated : 07/21/2021
Azure IoT Edge devices can be auto-provisioned using the [Device Provisioning Se
This article shows you how to create a Device Provisioning Service individual or group enrollment using symmetric key attestation on an IoT Edge device with the following steps: * Create an instance of IoT Hub Device Provisioning Service (DPS).
-* Create an enrollment for the device.
+* Create an individual or group enrollment.
* Install the IoT Edge runtime and connect to the IoT Hub. :::moniker range=">=iotedge-2020-11"
Create a new instance of the IoT Hub Device Provisioning Service in Azure, and l
After you have the Device Provisioning Service running, copy the value of **ID Scope** from the overview page. You use this value when you configure the IoT Edge runtime.
-## Choose a unique registration ID for the device
+## Choose a unique device registration ID
-A unique registration ID must be defined to identify each device. You can use the MAC address, serial number, or any unique information from the device.
+A unique registration ID must be defined to identify each device. You can use the MAC address, serial number, or any unique information from the device. For example, you could use a combination of a MAC address and serial number forming the following string for a registration ID: `sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6`. Valid characters are lowercase alphanumeric and dash (`-`).
-In this example, we use a combination of a MAC address and serial number forming the following string for a registration ID: `sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6`.
+## Option 1: Create a DPS individual enrollment
-Create a unique registration ID for your device. Valid characters are lowercase alphanumeric and dash ('-').
-
-## Create a DPS enrollment
-
-Use your device's registration ID to create an individual enrollment in DPS.
+Create an individual enrollment to provision a single device through DPS.
When you create an enrollment in DPS, you have the opportunity to declare an **Initial Device Twin State**. In the device twin, you can set tags to group devices by any metric you need in your solution, like region, environment, location, or device type. These tags are used to create [automatic deployments](how-to-deploy-at-scale.md). > [!TIP]
-> Group enrollments are also possible when using symmetric key attestation and involve the same decisions as individual enrollments.
+> The steps in this article are for the Azure portal, but you can also create individual enrollments using the Azure CLI. For more information, see [az iot dps enrollment](/cli/azure/iot/dps/enrollment). As part of the CLI command, use the **edge-enabled** flag to specify that the enrollment is for an IoT Edge device.
1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service.
When you create an enrollment in DPS, you have the opportunity to declare an **I
1. For **Mechanism**, select **Symmetric Key**.
- 1. Select the **Auto-generate keys** check box.
+ 1. Provide a unique **Registration ID** for your device.
- 1. Provide the **Registration ID** that you created for your device.
+ 1. Optionally, provide an **IoT Hub Device ID** for your device. You can use device IDs to target an individual device for module deployment. If you don't provide a device ID, the registration ID is used.
- 1. Provide an **IoT Hub Device ID** for your device if you'd like. You can use device IDs to target an individual device for module deployment. If you don't provide a device ID, the registration ID is used.
+ 1. Select **True** to declare that the enrollment is for an IoT Edge device.
- 1. Select **True** to declare that the enrollment is for an IoT Edge device. For a group enrollment, all devices must be IoT Edge devices or none of them can be.
+ 1. Optionally, add a tag value to the **Initial Device Twin State**. You can use tags to target groups of devices for module deployment. For example:
+
+ ```json
+ {
+ "tags": {
+ "environment": "test"
+ },
+ "properties": {
+ "desired": {}
+ }
+ }
+ ```
- > [!TIP]
- > In the Azure CLI, you can create an [enrollment](/cli/azure/iot/dps/enrollment) or an [enrollment group](/cli/azure/iot/dps/enrollment-group) and use the **edge-enabled** flag to specify that a device, or group of devices, is an IoT Edge device.
+ 1. Select **Save**.
- 1. Accept the default value from the Device Provisioning Service's allocation policy for **how you want to assign devices to hubs** or choose a different value that is specific to this enrollment.
+1. Copy the individual enrollment's **Primary Key** value to use when installing the IoT Edge runtime.
- 1. Choose the linked **IoT Hub** that you want to connect your device to. You can choose multiple hubs, and the device will be assigned to one of them according to the selected allocation policy.
+Now that an enrollment exists for this device, the IoT Edge runtime can automatically provision the device during installation.
- 1. Choose **how you want device data to be handled on re-provisioning** when devices request provisioning after the first time.
+## Option 2: Create a DPS enrollment group
- 1. Add a tag value to the **Initial Device Twin State** if you'd like. You can use tags to target groups of devices for module deployment. For example:
+Create an enrollment group to provision multiple devices through DPS.
+
+When you create an enrollment in DPS, you have the opportunity to declare an **Initial Device Twin State**. In the device twin, you can set tags to group devices by any metric you need in your solution, like region, environment, location, or device type. These tags are used to create [automatic deployments](how-to-deploy-at-scale.md).
+
+> [!TIP]
+> The steps in this article are for the Azure portal, but you can also create enrollment groups using the Azure CLI. For more information, see [az iot dps enrollment-group](/cli/azure/iot/dps/enrollment-group). As part of the CLI command, use the **edge-enabled** flag to specify that the enrollment is for IoT Edge devices. For a group enrollment, all devices must be IoT Edge devices or none of them can be.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service.
+
+1. Under **Settings**, select **Manage enrollments**.
+
+1. Select **Add enrollment group**, then complete the following steps to configure the enrollment:
+
+ 1. Provide a **Group name**.
+
+ 1. Select **Symmetric Key** as the attestation type.
+
+ 1. Select **True** to declare that the enrollment is for an IoT Edge device. For a group enrollment, all devices must be IoT Edge devices or none of them can be.
+
+ 1. Optionally, add a tag value to the **Initial Device Twin State**. You can use tags to target groups of devices for module deployment. For example:
```json {
When you create an enrollment in DPS, you have the opportunity to declare an **I
} ```
- 1. Ensure **Enable entry** is set to **Enable**.
- 1. Select **Save**.
-Now that an enrollment exists for this device, the IoT Edge runtime can automatically provision the device during installation. Be sure to copy your enrollment's **Primary Key** value to use when installing the IoT Edge runtime, or if you're going to be creating device keys for use with a group enrollment.
+1. Copy your enrollment group's **Primary Key** value to use when creating device keys for use with a group enrollment.
-## Derive a device key
+Now that an enrollment group exists, the IoT Edge runtime can automatically provision devices during installation.
-> [!NOTE]
-> This section is required only if using a group enrollment.
+### Derive a device key
-Each device uses its derived device key with your unique registration ID to perform symmetric key attestation with the enrollment during provisioning. To generate the device key, use the key you copied from your DPS enrollment to compute an [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the unique registration ID for the device and convert the result into Base64 format.
+Each device that is provisioned as part of a group enrollment needs a derived device key to perform symmetric key attestation with the enrollment during provisioning.
+
+To generate a device key, use the key that you copied from your DPS enrollment group to compute an [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the unique registration ID for the device and convert the result into Base64 format.
Do not include your enrollment's primary or secondary key in your device code.
-### Linux workstations
+#### Derive a key on Linux
-If you are using a Linux workstation, you can use openssl to generate your derived device key as shown in the following example.
+On Linux, you can use openssl to generate your derived device key as shown in the following example.
Replace the value of **KEY** with the **Primary Key** you noted earlier. Replace the value of **REG_ID** with your device's registration ID. ```bash
-KEY=8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw==
-REG_ID=sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6
+KEY=PASTE_YOUR_ENROLLMENT_KEY_HERE
+REG_ID=PASTE_YOUR_REGISTRATION_ID_HERE
keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | ba
Jsm0lyGpjaVYVP2g3FnmnmG9dI/9qU24wNoykUmermc= ```
-### Windows-based workstations
+#### Derive a key on Windows
-If you are using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
+On Windows, you can use PowerShell to generate your derived device key as shown in the following example.
Replace the value of **KEY** with the **Primary Key** you noted earlier. Replace the value of **REG_ID** with your device's registration ID. ```powershell
-$KEY='8isrFI1sGsIlvvFSSFRiMfCNzv21fjbE/+ah/lSh3lF8e2YG1Te7w1KpZhJFFXJrqYKi9yegxkqIChbqOS9Egw=='
-$REG_ID='sn-007-888-abc-mac-a1-b2-c3-d4-e5-f6'
+$KEY='PASTE_YOUR_ENROLLMENT_KEY_HERE'
+$REG_ID='PASTE_YOUR_REGISTRATION_ID_HERE'
$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
$hmacsha256.key = [Convert]::FromBase64String($KEY)
Have the following information ready:
* The DPS **ID Scope** value * The device **Registration ID** you created
-* The **Primary Key** you copied from the DPS enrollment
-
-> [!TIP]
-> For group enrollments, you need each device's [derived key](#derive-a-device-key) rather than the DPS enrollment primary key.
+* Either the **Primary Key** from an individual enrollment, or a [derived key](#derive-a-device-key) for devices using a group enrollment.
# [Linux](#tab/linux) <!-- 1.1 --> :::moniker range="iotedge-2018-06"+ 1. Open the configuration file on the IoT Edge device. ```bash
Have the following information ready:
provisioning: source: "dps" global_endpoint: "https://global.azure-devices-provisioning.net"
- scope_id: "<SCOPE_ID>"
+ scope_id: "PASTE_YOUR_SCOPE_ID_HERE"
attestation: method: "symmetric_key"
- registration_id: "<REGISTRATION_ID>"
- symmetric_key: "<SYMMETRIC_KEY>"
+ registration_id: "PASTE_YOUR_REGISTRATION_ID_HERE"
+ symmetric_key: "PASTE_YOUR_PRIMARY_KEY_OR_DERIVED_KEY_HERE"
# always_reprovision_on_startup: true # dynamic_reprovisioning: false ```
Have the following information ready:
[provisioning] source = "dps" global_endpoint = "https://global.azure-devices-provisioning.net"
- id_scope = "<SCOPE_ID>"
+ id_scope = "PASTE_YOUR_SCOPE_ID_HERE"
[provisioning.attestation] method = "symmetric_key"
- registration_id = "<REGISTRATION_ID>"
+ registration_id = "PASTE_YOUR_REGISTRATION_ID_HERE"
- symmetric_key = "<PRIMARY_KEY OR DERIVED_KEY>"
+ symmetric_key = "PASTE_YOUR_PRIMARY_KEY_OR_DERIVED_KEY_HERE"
``` 1. Update the values of `id_scope`, `registration_id`, and `symmetric_key` with your DPS and device information.
You can use either PowerShell or Windows Admin Center to provision your IoT Edge
For PowerShell, run the following command with the placeholder values updated with your own values: ```powershell
-Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId <ID_SCOPE_HERE> -registrationId <REGISTRATION_ID_HERE> -symmKey <PRIMARY_KEY_HERE>
+Provision-EflowVm -provisioningType DpsSymmetricKey -scopeId PASTE_YOUR_ID_SCOPE_HERE -registrationId PASTE_YOUR_REGISTRATION_ID_HERE -symmKey PASTE_YOUR_PRIMARY_KEY_OR_DERIVED_KEY_HERE
``` ### Windows Admin Center
For Windows Admin Center, use the following steps:
1. In the [Azure portal](https://ms.portal.azure.com/), navigate to your DPS instance.
-1. On the **Overview** tab, copy the **ID Scope** value. Paste it into the scope ID field in the Windows Admin Center.
-
-1. On the **Manage enrollments** tab in the Azure portal, select the enrollment you created. Copy the **Primary Key** value in the enrollment details. Paste it into the symmetric key field in the Windows Admin Center.
-
-1. Provide the registration ID of your device in the registration ID field in the Windows Admin Center.
+1. Provide your DPS scope ID, device registration ID, and enrollment primary key or derived key in the Windows Admin Center fields.
1. Choose **Provisioning with the selected method**.
load-balancer Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-outbound-connections.md
# Using Source Network Address Translation (SNAT) for outbound connections
-Certain scenarios require virtual machines or compute instances to have outbound connectivity to the internet. The frontend IPs of an Azure public load balancer can be used to provide outbound connectivity to the internet for backend instances. This configuration uses **source network address translation (SNAT)** as the **source** or virtual machine's IP is translated to a public IP address. SNAT maps the IP address of the backend to the public IP address of your load balancer. SNAT prevents outside sources from having a direct address to the backend instances.
+Certain scenarios require virtual machines or compute instances to have outbound connectivity to the internet. The frontend IPs of an Azure public load balancer can be used to provide outbound connectivity to the internet for backend instances. This configuration uses **source network address translation (SNAT)** because the **source**, the virtual machine's private IP address, is translated to a public IP address. SNAT maps the IP address of the backend to the public IP address of your load balancer. SNAT prevents outside sources from having a direct address to the backend instances.
## <a name="scenarios"></a>Azure's outbound connectivity methods
Outbound connectivity to the internet can be enabled in the following ways:
| 2 | Associating a NAT gateway to the subnet | Static, explicit | Yes | Best | | 3 | Assigning a Public IP to the Virtual Machine | Static, explicit | Yes | OK | | 4 | Using the frontend IP address(es) of a Load Balancer for outbound (and inbound) | Implicit | No | Second worst |
-| 5 | Using default SNAT for outbound. | Default | No | Worst |
+| 5 | Using default outbound access | Implicit | No | Worst |
## <a name="outboundrules"></a>Using the frontend IP address of a load balancer for outbound via outbound rules
For more information about Azure Virtual Network NAT, see [What is Azure Virtual
## <a name="snat"></a> Using the frontend IP address of a load balancer for outbound (and inbound)

>[!NOTE]
-> This method is **NOT recommended** for production workloads as it adds risk of exhausting ports. Please refrain from using this method for production workloads to avoid potential connection failures.
+> This method is **NOT recommended** for production workloads as it adds risk of exhausting ports. Please refrain from using this method for production workloads to avoid potential connection failures due to SNAT port exhaustion.
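If you need explicit, predictable outbound connectivity through a Standard load balancer, an outbound rule is the supported alternative. The following Azure CLI sketch (shown here invoked from PowerShell) is illustrative only; the resource group, load balancer, frontend IP configuration, and backend pool names are placeholders for resources you've already created.

```powershell
# Illustrative only: adds an explicit outbound rule to an existing Standard load balancer.
# MyResourceGroup, MyLoadBalancer, MyFrontendIP, and MyBackendPool are placeholder names.
az network lb outbound-rule create `
  --resource-group MyResourceGroup `
  --lb-name MyLoadBalancer `
  --name MyOutboundRule `
  --frontend-ip-configs MyFrontendIP `
  --address-pool MyBackendPool `
  --protocol All `
  --outbound-ports 10000 `
  --idle-timeout 15
```

Allocating outbound ports explicitly, rather than relying on automatic allocation, gives each backend instance a predictable number of SNAT ports.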
A resource in the backend of a load balancer without:
* Instance level public IP address
* NAT gateway configured
-Creates outbound connections via the frontend IP of the load balancer.
+creates outbound connections via the frontend IP of the load balancer and leverages default SNAT (also known as Default Outbound Access).
+
+## Default outbound access
A resource without:
* Outbound rules
* Instance level public IP address
* NAT gateway configured
-* In the backend of a load balancer.
+* A load balancer
-Creates outbound connections via default SNAT.
+creates outbound connections via default SNAT. This is known as Default Outbound Access. Another example of a scenario using default SNAT is that a virtual machine in Azure (without associatedions mentioned above). In this case outbound connectivity is provided by the Default Outbound Access IP. This is a dynamic IP assigned by Azure that you cannot control. Default SNAT is not recommended for production workloads
### What are SNAT ports?
The following <a name="snatporttable"></a>table shows the SNAT port preallocatio
## Next steps * [Troubleshoot outbound connection failures because of SNAT exhaustion](./troubleshoot-outbound-connection.md)
-* [Review SNAT metrics](./load-balancer-standard-diagnostics.md#how-do-i-check-my-snat-port-usage-and-allocation) and familiarize yourself with the correct way to filter, split, and view them.
+* [Review SNAT metrics](./load-balancer-standard-diagnostics.md#how-do-i-check-my-snat-port-usage-and-allocation) and familiarize yourself with the correct way to filter, split, and view them.
load-balancer Load Balancer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/load-balancer-troubleshoot.md
When the external clients to the backend VMs go through the load balancer, the I
**Validation and resolution**
-Standard ILBs are **secure by default**. Basic ILBs allowed connecting to the internet via a *hidden* Public IP address called the default outbound access IP. This isn't recommended for production workloads as the IP address is neither static nor locked down via NSGs that you own. If you recently moved from a Basic ILB to a Standard ILB, you should create a Public IP explicitly via [Outbound only](egress-only.md) configuration, which locks down the IP via NSGs. You can also use a [NAT Gateway](../virtual-network/nat-gateway/nat-overview.md) on your subnet. NAT Gateway is the reccomended solution for outbound.
+Standard ILBs are **secure by default**. Basic ILBs allowed connecting to the internet via a *hidden* Public IP address called the default outbound access IP. This isn't recommended for production workloads as the IP address is neither static nor locked down via NSGs that you own. If you recently moved from a Basic ILB to a Standard ILB, you should create a Public IP explicitly via [Outbound only](egress-only.md) configuration, which locks down the IP via NSGs. You can also use a [NAT Gateway](../virtual-network/nat-gateway/nat-overview.md) on your subnet. NAT Gateway is the recommended solution for outbound.
## Can't change backend port for existing LB rule of a load balancer that has virtual machine scale set deployed in the backend pool.
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-designer.md
Real-time endpoints must be deployed to an Azure Kubernetes Service cluster.
To learn how to deploy your model, see [Tutorial: Deploy a machine learning model with the designer](tutorial-designer-automobile-price-deploy.md). + ## Publish You can also publish a pipeline to a **pipeline endpoint**. Similar to a real-time endpoint, a pipeline endpoint lets you submit new pipeline runs from external applications using REST calls. However, you cannot send or receive data in real time using a pipeline endpoint.
machine-learning Concept Plan Manage Cost https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-plan-manage-cost.md
When you create resources for an Azure Machine Learning workspace, resources for
* [Key Vault](https://azure.microsoft.com/pricing/details/key-vault?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) * [Application Insights](https://azure.microsoft.com/en-us/pricing/details/monitor?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
+### Costs might accrue before resource deletion
+
+Before you delete an Azure Machine Learning workspace in the Azure portal or with the Azure CLI, the following sub-resources commonly continue to accrue costs even when you are not actively working in the workspace. If you plan to return to your Azure Machine Learning workspace later, keep in mind that these resources continue to accrue costs until you stop or delete them.
+
+* VMs
+* Load Balancer
+* Virtual Network
+* Bandwidth
+
+Each VM is billed for every hour it is running, and the cost depends on the VM specifications. Compute instances and clusters also incur load balancer charges even when the VMs are idle: one standard load balancer is billed per day for each compute instance, and one for every 50 nodes of a compute cluster, at roughly $0.33 per load balancer per day. To avoid load balancer costs on stopped compute instances and compute clusters, delete the compute resource. One virtual network is billed per subscription and per region; virtual networks cannot span regions or subscriptions. Setting up private endpoints in virtual network setups may also incur charges. Bandwidth is charged by usage: the more data transferred, the more you are charged.
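As a rough worked example of the load balancer line item only, using the approximate $0.33/day figure above (actual prices vary by region and over time), the following sketch estimates a month of load balancer charges for a hypothetical workspace with three compute instances and a 120-node compute cluster.

```powershell
# Hypothetical workspace: 3 compute instances and one 120-node compute cluster.
$computeInstances = 3
$clusterNodes     = 120
$lbCostPerDay     = 0.33   # approximate standard load balancer cost per day (see above)

# One load balancer per compute instance, plus one per every 50 cluster nodes (rounded up).
$loadBalancers = $computeInstances + [math]::Ceiling($clusterNodes / 50)
$monthlyCost   = [math]::Round($loadBalancers * $lbCostPerDay * 30, 2)

"{0} load balancers x `${1}/day = about `${2} per month" -f $loadBalancers, $lbCostPerDay, $monthlyCost
```

In this example, six load balancers at roughly $0.33/day come to about $59 per month, before any VM, virtual network, or bandwidth charges.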
+ ### Costs might accrue after resource deletion After you delete an Azure Machine Learning workspace in the Azure portal or with Azure CLI, the following resources continue to exist. They continue to accrue costs until you delete them.
Use the following tips to help you manage and optimize your compute resource cos
- Parallelize training - Set data retention and deletion policies - Deploy resources to the same region
+- Delete instances and clusters if you do not plan on using them in the near future.
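For example, a compute target you no longer need can be removed from the CLI. This is a hedged sketch that assumes the v1 `azure-cli-ml` extension is installed; `cpu-cluster`, `my-rg`, and `my-workspace` are placeholder names.

```powershell
# Placeholder names: replace with your compute target, resource group, and workspace.
# Assumes the azure-cli-ml (v1) extension is installed.
az ml computetarget delete `
  --name cpu-cluster `
  --resource-group my-rg `
  --workspace-name my-workspace
```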
For more information, see [manage and optimize costs in Azure Machine Learning](how-to-manage-optimize-cost.md).
For more information, see [manage and optimize costs in Azure Machine Learning](
- Learn [how to optimize your cloud investment with Azure Cost Management](../cost-management-billing/costs/cost-mgt-best-practices.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).-- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
+- Take the [Cost Management](/learn/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-and-where.md
The workflow is similar no matter where you deploy your model:
For more information on the concepts involved in the machine learning deployment workflow, see [Manage, deploy, and monitor models with Azure Machine Learning](concept-model-management-and-deployment.md). + ## Prerequisites # [Azure CLI](#tab/azcli)
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
When deploying to Azure Kubernetes Service, you deploy to an AKS cluster that is
> > You can also refer to Azure Machine Learning - [Deploy to Local Notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/deployment/deploy-to-local) + ## Prerequisites - An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-inferencing-gpus.md
This article teaches you how to use Azure Machine Learning to deploy a GPU-enabl
Inference, or model scoring, is the phase where the deployed model is used to make predictions. Using GPUs instead of CPUs offers performance advantages on highly parallelizable computation. + > [!IMPORTANT] > For web service deployments, GPU inference is only supported on Azure Kubernetes Service. For inference using a __machine learning pipeline__, GPUs are only supported on Azure Machine Learning Compute. For more information on using ML pipelines, see [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](tutorial-pipeline-batch-scoring-classification.md).
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-local-container-notebook-vm.md
Learn how to use Azure Machine Learning to deploy a model as a web service on yo
> [!TIP] > Deploying a model from a Jupyter Notebook on a compute instance, to a web service on the same VM is a _local deployment_. In this case, the 'local' computer is the compute instance. For more information on deployments, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md). + ## Prerequisites - An Azure Machine Learning workspace with a compute instance running. For more information, see [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md).
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-mlflow-models.md
In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model
Azure Machine Learning offers deployment configurations for: * Azure Container Instance (ACI) which is a suitable choice for a quick dev-test deployment. * Azure Kubernetes Service (AKS) which is recommended for scalable production deployments.++ > [!TIP] > The information in this document is primarily for data scientists and developers who want to deploy their MLflow model to an Azure Machine Learning web service endpoint. If you are an administrator interested in monitoring resource usage and events from Azure Machine Learning, such as quotas, completed training runs, or completed model deployments, see [Monitoring Azure Machine Learning](monitor-azure-machine-learning.md).
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-with-triton.md
az ml model deploy -n triton-webservice -m triton_model:1 --dc deploymentconfig.
See [this documentation for more details on deploying models](how-to-deploy-and-where.md). + ### Call into your deployed model First, get your scoring URI and bearer tokens.
machine-learning How To Run Batch Predictions Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-run-batch-predictions-designer.md
In this how-to, you learn to do the following tasks:
To learn how to set up batch scoring services using the SDK, see the accompanying [how-to](./tutorial-pipeline-batch-scoring-classification.md). + ## Prerequisites This how-to assumes you already have a training pipeline. For a guided introduction to the designer, complete [part one of the designer tutorial](tutorial-designer-automobile-price-train-score.md).
media-services Spatial Analysis Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/live-video-analytics-edge/spatial-analysis-tutorial.md
The MediaGraphCognitiveServicesVisionExtension node plays the role of a proxy. I
## Create the Computer Vision resource
-You need to create an Azure resource of type Computer Vision either on [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or via Azure CLI. You will be able to create the resource once your request for access to the container has been approved and your Azure Subscription ID has been registered. Go to https://aka.ms/csgate to submit your use case and your Azure Subscription ID. You need to create the Azure resource using the same Azure subscription that has been provided on the Request for Access form.
+You need to create an Azure resource of type Computer Vision either on [Azure portal](../../iot-edge/how-to-deploy-modules-portal.md) or via Azure CLI.
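If you prefer the CLI route, the following is a minimal sketch; the account name, resource group, SKU, and region are placeholder values you'd replace with your own.

```powershell
# Illustrative only: creates a Computer Vision resource with placeholder names and values.
az cognitiveservices account create `
  --name my-computer-vision `
  --resource-group my-resource-group `
  --kind ComputerVision `
  --sku S1 `
  --location westus2 `
  --yes
```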
### Gathering required parameters
migrate Deploy Appliance Script Government https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/deploy-appliance-script-government.md
Last updated 03/13/2021
-# Set up an appliance in Azure Government
+# Set up an appliance for Azure Government cloud
-Follow this article to deploy an [Azure Migrate appliance](./migrate-appliance-architecture.md) for servers on VMware environment, servers on Hyper-V, and physical servers, in an Azure Government cloud. You run a script to create the appliance, and verify that it can connect to Azure. If you want to set up an appliance in the public cloud, follow [this article](deploy-appliance-script.md).
+Follow this article to deploy an [Azure Migrate appliance](./migrate-appliance-architecture.md) for Azure Government cloud to perform:
+- discovery, assessment, and agentless replication of servers running in a VMware environment
+- discovery and assessment of servers running in a Hyper-V environment
+- discovery and assessment of physical servers or servers running on other clouds such as AWS, GCP, and Xen
-> [!NOTE]
-> The option to deploy an appliance using a template (for servers on VMware environment and Hyper-V environment) isn't supported in Azure Government.
+If you want to set up an appliance in the public cloud, follow [this article](deploy-appliance-script.md).
+> [!NOTE]
+> The option to deploy an appliance using a template (for servers running in a VMware or Hyper-V environment) isn't supported in Azure Government. You must use the installer script instead.
## Prerequisites
-The script sets up the Azure Migrate appliance on an existing physical server or on a virtualized server.
+You can use the script to deploy the Azure Migrate appliance on an existing physical or virtualized server.
-- The server that will act as the appliance must be running Windows Server 2016, with 32 GB of memory, eight vCPUs, around 80 GB of disk storage, and an external virtual switch. It requires a static or dynamic IP address. -- Before you deploy the appliance, review detailed appliance requirements for [servers on VMware](migrate-appliance.md#appliancevmware), [on Hyper-V](migrate-appliance.md#appliancehyper-v), and [physical servers](migrate-appliance.md#appliancephysical).-- Don't run the script on an existing Azure Migrate appliance.
+- The server that will act as the appliance must be running Windows Server 2016 and meet other requirements for [VMware](migrate-appliance.md#appliancevmware), [Hyper-V](migrate-appliance.md#appliancehyper-v), and [physical servers](migrate-appliance.md#appliancephysical).
+- If you run the script on a server that already has an Azure Migrate appliance set up, you can choose to clean up the existing configuration and set up a fresh appliance with the desired configuration. When you execute the script, you will see a notification as shown below:
+
+ :::image type="content" source="./media/deploy-appliance-script/script-reconfigure-appliance.png" alt-text="Screenshot that shows how to reconfigure an appliance.":::
## Set up the appliance for VMware
-To set up the appliance for VMware you download a zipped file from the Azure portal, and extract the contents. You run the PowerShell script to launch the appliance web app. You set up the appliance and configure it for the first time. Then, you register the appliance with the project.
+1. To set up the appliance, you download the zipped file named AzureMigrateInstaller.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2140334).
+1. Extract the contents on the server where you want to deploy the appliance.
+1. Execute the PowerShell script to launch the appliance configuration manager.
+1. Set up the appliance and configure it for the first time.
### Download the script 1. In **Migration Goals** > **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, click **Discover**. 2. In **Discover server** > **Are your servers virtualized?**, select **Yes, with VMware vSphere hypervisor**.
+3. Provide an appliance name and generate a project key in the portal.
3. Click **Download**, to download the zipped file.
-### Verify file security
+### Verify security
Check that the zipped file is secure, before you deploy it. 1. On the server to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file
+2. Run the following command to generate the hash for the zipped file:
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-VMWare-USGov.zip SHA256```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
-3. Verify the latest appliance version and hash value:
+ **Download** | **Hash value**
+ --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
- **Algorithm** | **Download** | **SHA256**
- | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140337) | 2daaa2a59302bf911e8ef195f8add7d7c8352de77a9af0b860e2a627979085ca
+> [!NOTE]
+> The same script can be used to set up VMware appliance for Azure Government cloud with either public or private endpoint connectivity.
### Run the script
-Here's what the script does:
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
-- Installs agents and a web application.-- Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Downloads and installs an IIS rewritable module.-- Updates a registry key (HKLM), with persistent settings for Azure Migrate.-- Creates log and configuration files as follows:
- - **Config Files**: %ProgramData%\Microsoft Azure\Config
- - **Log Files**: %ProgramData%\Microsoft Azure\Logs
-
-To run the script:
-
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
-2. Launch PowerShell on the server, with administrator (elevated) privileges.
-3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file.
-4. Run the script **AzureMigrateInstaller.ps1**, as follows:
- ``` PS C:\Users\Administrators\Desktop\AzureMigrateInstaller-VMWare-USGov>.\AzureMigrateInstaller.ps1 ```
-1. After the script runs successfully, it launches the appliance web application so that you can set up the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover, assess and migrate **servers running in your VMware environment** to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure Government cloud**.
+
+ :::image type="content" source="./media/deploy-appliance-script-government/script-vmware-gov-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for Vmware." lightbox="./media/deploy-appliance-script-government/script-vmware-gov-expanded.png":::
+
+6. The installer script does the following:
+
+- Installs agents and a web application.
+- Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+- Downloads and installs an IIS rewritable module.
+- Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+- Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
+
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify access
Make sure that the appliance can connect to Azure URLs for [government clouds](m
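A quick way to spot-check connectivity from the appliance server is `Test-NetConnection` over port 443 against one of the required endpoints. The endpoint below is only an example; validate against the full government cloud URL list referenced above.

```powershell
# Example spot check: verify outbound HTTPS connectivity from the appliance server.
# login.microsoftonline.us is used here as an example Azure Government endpoint.
Test-NetConnection -ComputerName login.microsoftonline.us -Port 443
```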
## Set up the appliance for Hyper-V
-To set up the appliance for Hyper-V you download a zipped file from the Azure portal, and extract the contents. You run the PowerShell script to launch the appliance web app. You set up the appliance and configure it for the first time. Then, you register the appliance with the project.
+1. To set up the appliance, you download the zipped file named AzureMigrateInstaller.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2140334).
+1. Extract the contents on the server where you want to deploy the appliance.
+1. Execute the PowerShell script to launch the appliance configuration manager.
+1. Set up the appliance and configure it for the first time.
### Download the script 1. In **Migration Goals** > **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, click **Discover**. 2. In **Discover servers** > **Are your servers virtualized?**, select **Yes, with Hyper-V**.
-3. Click **Download**, to download the zipped file.
+3. Provide an appliance name and generate a project key in the portal.
+3. Click **Download**, to download the zipped file.
-
-### Verify file security
+### Verify security
Check that the zipped file is secure, before you deploy it. 1. On the server to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file
+2. Run the following command to generate the hash for the zipped file:
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-HyperV-USGov.zip SHA256```
-
-3. Verify the latest appliance version and hash value:
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
- **Scenario** | **Download** | **SHA256**
- | |
- Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | db5311de3d1d4a1167183a94e8347456db9c5749c7332ff2eb4b777798765e48
+ **Download** | **Hash value**
+ --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140424) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
-
+> [!NOTE]
+> The same script can be used to set up Hyper-V appliance for Azure Government cloud with either public or private endpoint connectivity.
### Run the script
-Here's what the script does:
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
-- Installs agents and a web application.-- Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Downloads and installs an IIS rewritable module.-- Updates a registry key (HKLM), with persistent settings for Azure Migrate.-- Creates log and configuration files as follows:
- - **Config Files**: %ProgramData%\Microsoft Azure\Config
- - **Log Files**: %ProgramData%\Microsoft Azure\Logs
+
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your Hyper-V environment** to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure Government cloud**.
+
+ :::image type="content" source="./media/deploy-appliance-script-government/script-hyperv-gov-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for Hyper-V." lightbox="./media/deploy-appliance-script-government/script-hyperv-gov-expanded.png":::
+
+6. The installer script does the following:
-To run the script:
+ - Installs agents and a web application.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewritable module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server running an existing Azure Migrate appliance.
-2. Launch PowerShell on the server, with administrator (elevated) privileges.
-3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file.
-4. Run the script **AzureMigrateInstaller.ps1**, as follows:
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
- ``` PS C:\Users\Administrators\Desktop\AzureMigrateInstaller-HyperV-USGov>.\AzureMigrateInstaller.ps1 ```
-1. After the script runs successfully, it launches the appliance web application so that you can set up the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify access
Make sure that the appliance can connect to Azure URLs for [government clouds](m
## Set up the appliance for physical servers
-To set up the appliance for VMware you download a zipped file from the Azure portal, and extract the contents. You run the PowerShell script to launch the appliance web app. You set up the appliance and configure it for the first time. Then, you register the appliance with the project.
+1. To set up the appliance, you download the zipped file named AzureMigrateInstaller.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2140334).
+1. Extract the contents on the server where you want to deploy the appliance.
+1. Execute the PowerShell script to launch the appliance configuration manager.
+1. Set up the appliance and configure it for the first time.
### Download the script 1. In **Migration Goals** > **Windows, Linux and SQL Servers** > **Azure Migrate: Discovery and assessment**, click **Discover**.
-2. In **Discover servers** > **Are your servers virtualized?**, select **Not virtualized/Other**.
+2. In **Discover servers** > **Are your servers virtualized?**, select **Physical or other (AWS, GCP, Xen etc.)**.
3. Click **Download**, to download the zipped file.
-### Verify file security
+### Verify security
Check that the zipped file is secure, before you deploy it.
-1. On the servers to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file
+1. On the server to which you downloaded the file, open an administrator command window.
+2. Run the following command to generate the hash for the zipped file:
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-USGov.zip SHA256```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
-3. Verify the latest appliance version and hash value:
+ **Download** | **Hash value**
+ --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
- **Scenario** | **Download*** | **Hash value**
- | |
- Physical (85 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | cfed44bb52c9ab3024a628dc7a5d0df8c624f156ec1ecc3507116bae330b257f
-
+> [!NOTE]
+> The same script can be used to set up Physical appliance for Azure Government cloud with either public or private endpoint connectivity.
### Run the script
-Here's what the script does:
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
-- Installs agents and a web application.-- Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Downloads and installs an IIS rewritable module.-- Updates a registry key (HKLM), with persistent settings for Azure Migrate.-- Creates log and configuration files as follows:
- - **Config Files**: %ProgramData%\Microsoft Azure\Config
- - **Log Files**: %ProgramData%\Microsoft Azure\Logs
-
-To run the script:
-
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server running an existing Azure Migrate appliance.
-2. Launch PowerShell on the server, with administrator (elevated) privileges.
-3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file.
-4. Run the script **AzureMigrateInstaller.ps1**, as follows:
-
- ``` PS C:\Users\Administrators\Desktop\AzureMigrateInstaller-Server-USGov>.\AzureMigrateInstaller.ps1 ```
-1. After the script runs successfully, it launches the appliance web application so that you can set up the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **physical servers** _(or servers running on other clouds like AWS, GCP, Xen etc.)_ to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure Government cloud**.
+
+ :::image type="content" source="./media/deploy-appliance-script-government/script-physical-gov-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for Physical servers." lightbox="./media/deploy-appliance-script-government/script-physical-gov-expanded.png":::
+
+6. The installer script does the following:
+
+ - Installs agents and a web application.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewritable module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
+
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify access
migrate Deploy Appliance Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/deploy-appliance-script.md
Title: Set up an Azure Migrate appliance with a script description: Learn how to set up an Azure Migrate appliance with a script --++ ms. Last updated 03/18/2021
Last updated 03/18/2021
# Set up an appliance with a script
-Follow this article to create an [Azure Migrate appliance](./migrate-appliance-architecture.md) for the assessment/migration of servers on VMware, and on Hyper-V. You run a script to create an appliance, and verify that it can connect to Azure.
+Follow this article to deploy an [Azure Migrate appliance](./migrate-appliance-architecture.md) using a PowerShell script for:
+- discovery, assessment, and agentless replication of servers running in a VMware environment
+- discovery and assessment of servers running in a Hyper-V environment.
-You can deploy the appliance for servers on VMware and on Hyper-V using a script, or using a template that you download from the Azure portal. Using a script is useful if you're unable to create an appliance using the downloaded template.
+You can deploy the appliance for servers on VMware and on Hyper-V by using either a script or a template (OVA/VHD) that you download from the Azure portal. Using a script is useful if you're unable to create an appliance using the downloaded template.
-- To use a template, follow the tutorials for [VMware](./tutorial-discover-vmware.md) or [Hyper-V](./tutorial-discover-hyper-v.md).
+- To use a template, follow the tutorials for [VMware](./tutorial-discover-vmware.md) and [Hyper-V](./tutorial-discover-hyper-v.md).
- To set up an appliance for physical servers, you can only use a script. Follow [this article](how-to-set-up-appliance-physical.md).-- To set up an appliance in an Azure Government cloud, follow [this article](deploy-appliance-script-government.md).
+- To set up an appliance in an Azure Government cloud, you can only use a script. Follow [this article](deploy-appliance-script-government.md).
## Prerequisites
-The script sets up the Azure Migrate appliance on an existing server.
+You can use the script to deploy the Azure Migrate appliance on an existing server in your VMware or Hyper-V environment.
-- The server that will act as the appliance must meet the following hardware and OS requirements:
+- The server that hosts the appliance must meet the following hardware and OS requirements:
Scenario | Requirements |
-VMware | Windows Server 2016, with 32 GB of memory, eight vCPUs, around 80 GB of disk storage
-Hyper-V | Windows Server 2016, with 16 GB of memory, eight vCPUs, around 80 GB of disk storage
+VMware | Windows Server 2016, with 32 GB of memory, eight vCPUs, around 80 GB of disk storage.
+Hyper-V | Windows Server 2016, with 16 GB of memory, eight vCPUs, around 80 GB of disk storage.
- The server also needs an external virtual switch. It requires a static or dynamic IP address. -- Before you deploy the appliance, review detailed appliance requirements for [servers on VMware](migrate-appliance.md#appliancevmware), [on Hyper-V](migrate-appliance.md#appliancehyper-v).-- Don't run the script on an existing Azure Migrate appliance.
+- Before you deploy the appliance, review detailed appliance requirements for [VMware](migrate-appliance.md#appliancevmware) and [Hyper-V](migrate-appliance.md#appliancehyper-v).
+- If you run the script on a server that already has an Azure Migrate appliance set up, you can choose to clean up the existing configuration and set up a fresh appliance with the desired configuration. When you execute the script, you will see a notification as shown below:
+
+ ![Setting up appliance with desired configuration](./media/deploy-appliance-script/script-reconfigure-appliance.png)
## Set up the appliance for VMware
-To set up the appliance for VMware you download the zipped file named AzureMigrateInstaller-Server-Public.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2140334), and extract the contents. You run the PowerShell script to launch the appliance web app. You set up the appliance and configure it for the first time. Then, you register the appliance with project.
+1. To set up the appliance, you download the zipped file named AzureMigrateInstaller.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2116601).
+1. Extract the contents on the server where you want to deploy the appliance.
+1. Execute the PowerShell script to launch the appliance configuration manager.
+1. Set up the appliance and configure it for the first time.
-### Verify file security
+### Verify security
Check that the zipped file is secure, before you deploy it. 1. On the server to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file
+2. Run the following command to generate the hash for the zipped file:
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-Public.zip SHA256```
-3. Verify the latest appliance version and script for Azure public cloud:
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
+
+ **Download** | **Hash value**
+ --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2116601) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
- **Algorithm** | **Download** | **SHA256**
- | |
- VMware (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2116601) | 85b74d93dfcee43412386141808d82147916330e6669df94c7969fe1b3d0fe72
+> [!NOTE]
+> The same script can be used to set up VMware appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
### Run the script
-Here's what the script does:
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
+
+
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover, assess and migrate **servers running in your VMware environment** to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
+
+ :::image type="content" source="./media/deploy-appliance-script/script-vmware-default-inline.png" alt-text="Screenshot that shows how to set up Vmware appliance with desired configuration." lightbox="./media/deploy-appliance-script/script-vmware-default-expanded.png":::
-- Installs agents and a web application.-- Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Downloads and installs an IIS rewritable module.-- Updates a registry key (HKLM), with persistent settings for Azure Migrate.-- Creates log and configuration files as follows:
- - **Config Files**: %ProgramData%\Microsoft Azure\Config
- - **Log Files**: %ProgramData%\Microsoft Azure\Logs
+6. The installer script does the following:
-To run the script:
+ - Installs agents and a web application.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewritable module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on an existing Azure Migrate appliance.
-2. Launch PowerShell on the server, with administrator (elevated) privileges.
-3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file.
-4. Run the script **AzureMigrateInstaller.ps1**, as follows:
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
- ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public> .\AzureMigrateInstaller.ps1 -scenario VMware ```
-
-5. After the script runs successfully, it launches the appliance web application so that you can set up the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify access
Make sure that the appliance can connect to Azure URLs for the [public](migrate-
## Set up the appliance for Hyper-V
-To set up the appliance for Hyper-V you download the zipped file named AzureMigrateInstaller-Server-Public.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2105112), and extract the contents. You run the PowerShell script to launch the appliance web app. You set up the appliance and configure it for the first time. Then, you register the appliance with project.
+1. To set up the appliance, you download the zipped file named AzureMigrateInstaller.zip either from the portal or from [here](https://go.microsoft.com/fwlink/?linkid=2116657).
+1. Extract the contents on the server where you want to deploy the appliance.
+1. Execute the PowerShell script to launch the appliance configuration manager.
+1. Set up the appliance and configure it for the first time.
-
-### Verify file security
+### Verify security
Check that the zipped file is secure, before you deploy it. 1. On the server to which you downloaded the file, open an administrator command window.
-2. Run the following command to generate the hash for the zipped file
+2. Run the following command to generate the hash for the zipped file:
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-HyperV.zip SHA256```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
-3. Verify the latest appliance version and script for Azure public cloud:
+ **Download** | **Hash value**
+ --- | ---
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2116657) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
- **Scenario** | **Download** | **SHA256**
- | |
- Hyper-V (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2116657) | 9bbef62e2e22481eda4b77c7fdf05db98c3767c20f0a873114fb0dcfa6ed682a
+> [!NOTE]
+> The same script can be used to set up Hyper-V appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
### Run the script
-Here's what the script does:
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
+
+
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your Hyper-V environment** to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
+
+ :::image type="content" source="./media/deploy-appliance-script/script-hyperv-default-inline.png" alt-text="Screenshot that shows how to set up Hyper-V appliance with desired configuration." lightbox="./media/deploy-appliance-script/script-hyperv-default-expanded.png":::
-- Installs agents and a web application.-- Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Downloads and installs an IIS rewritable module.-- Updates a registry key (HKLM), with persistent settings for Azure Migrate.-- Creates log and configuration files as follows:
- - **Config Files**: %ProgramData%\Microsoft Azure\Config
- - **Log Files**: %ProgramData%\Microsoft Azure\Logs
+6. The installer script does the following:
-To run the script:
+ - Installs agents and a web application.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewritable module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on an existing Azure Migrate appliance.
-2. Launch PowerShell on the server, with administrator (elevated) privileges.
-3. Change the PowerShell directory to the folder containing the contents extracted from the downloaded zipped file.
-4. Run the script **AzureMigrateInstaller.ps1**, as follows:
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
- ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public> .\AzureMigrateInstaller.ps1 -scenario Hyperv ```
-
-5. After the script runs successfully, it launches the appliance web application so that you can set up the appliance. If you encounter any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify access
migrate How To Scale Out For Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-scale-out-for-migration.md
To add a scale-out appliance, follow the steps mentioned below:
### 2. Download the installer for the scale-out appliance In **Download Azure Migrate appliance**, click **Download**. You need to download the PowerShell installer script to deploy the scale-out appliance on an existing server running Windows Server 2016 and with the required hardware configuration (32-GB RAM, 8 vCPUs, around 80 GB of disk storage and internet access, either directly or through a proxy).+ :::image type="content" source="./media/how-to-scale-out-for-migration/download-scale-out.png" alt-text="Download script for scale-out appliance"::: > [!TIP] > You can validate the checksum of the downloaded zip file using these steps: >
-> 1. Open command prompt as an administrator
+> 1. On the server to which you downloaded the file, open an administrator command window.
> 2. Run the following command to generate the hash for the zipped file: - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example usage for public cloud: ```C:\>Get-FileHash -Path .\AzureMigrateInstaller-VMware-Public-Scaleout.zip -Algorithm SHA256 ```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
> 3. Download the latest version of the scale-out appliance installer from the portal if the computed hash value doesn't match this string:
-1E6B6E3EE8B2A800818B925F5DA67EF7874DAD87E32847120B32F3E21F5960F9
+15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
### 3. Run the Azure Migrate installer script
-The installer script does the following:
-- Installs gateway agent and appliance configuration manager to perform more concurrent server replications.-- Install Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Download and installs an IIS rewritable module.-- Updates a registry key (HKLM) with persistent setting details for Azure Migrate.-- Creates the following files under the path:
- - **Config Files**: %Programdata%\Microsoft Azure\Config
- - **Log Files**: %Programdata%\Microsoft Azure\Logs
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
-Run the script as follows:
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
-1. Extract the zip file to a folder on the server that will host the scale-out appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
-2. Launch PowerShell on the above server with administrative (elevated) privilege.
-3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zip file.
-4. Run the script named **AzureMigrateInstaller.ps1** using the following command:
+5. Select from the scenario, cloud, configuration and connectivity options to deploy the desired appliance. For instance, the selection shown below sets up a **scale-out appliance** to initiate concurrent replications on servers running in your VMware environment to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
- - For the public cloud:
-
- ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public> .\AzureMigrateInstaller.ps1 ```
+ :::image type="content" source="./media/how-to-scale-out-for-migration/script-vmware-scaleout-inline.png" alt-text="Screenshot that shows how to set up scale-out appliance." lightbox="./media/how-to-scale-out-for-migration/script-vmware-scaleout-expanded.png":::
- The script will launch the appliance configuration manager when it completes the execution.
+6. The installer script does the following:
-If you come across any issues, you can access the script logs at: <br/> C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
+ - Installs the gateway agent and the appliance configuration manager to perform more concurrent server replications.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewritable module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
+
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### 4. Configure the appliance
Before you begin ensure that the [these Azure endpoints](migrate-appliance.md#pu
1. Paste the **Azure Migrate project key** copied from the portal. If you do not have the key, go to **Server Assessment> Discover> Manage existing appliances**, select the primary appliance name, find the scale-out appliance associated with it and copy the corresponding key. 1. You will need a device code to authenticate with Azure. Clicking on **Login** will open a modal with the device code as shown below.+
+ :::image type="content" source="./media/tutorial-discover-vmware/device-code.png" alt-text="Modal showing the device code":::
1. Click on **Copy code & Login** to copy the device code and open an Azure Login prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser. 1. On the new tab, paste the device code and sign-in by using your Azure username and password.
migrate How To Set Up Appliance Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-set-up-appliance-physical.md
Title: Set up an Azure Migrate appliance for physical servers description: Learn how to set up an Azure Migrate appliance for physical server discovery and assessment.--++ ms. Last updated 03/13/2021
The Azure Migrate appliance is a lightweight appliance, used by Azure Migrate: D
To set up the appliance you: -- Provide an appliance name and generate a project key in the portal.-- Download a zipped file with Azure Migrate installer script from the Azure portal.-- Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.-- Execute the PowerShell script to launch the appliance web application.-- Configure the appliance for the first time, and register it with the project using the project key.
+1. Provide an appliance name and generate a project key in the portal.
+2. Download a zipped file with Azure Migrate installer script from the Azure portal.
+3. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
+4. Execute the PowerShell script to launch the appliance configuration manager.
+5. Configure the appliance for the first time, and register it with the project using the project key.
### Generate the project key
To set up the appliance you:
1. After the successful creation of the Azure resources, a **project key** is generated. 1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
+ ![Selections for Generate Key](./media/tutorial-assess-physical/generate-key-physical-1.png)
+ ### Download the installer script In **2: Download Azure Migrate appliance**, click on **Download**.
- ![Selections for Discover machines](./media/tutorial-assess-physical/servers-discover.png)
--
- ![Selections for Generate Key](./media/tutorial-assess-physical/generate-key-physical.png)
- ### Verify security Check that the zipped file is secure, before you deploy it.
-1. On the servers to which you downloaded the file, open an administrator command window.
+1. On the server to which you downloaded the file, open an administrator command window.
2. Run the following command to generate the hash for the zipped file: - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example usage for public cloud: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public.zip SHA256 ```
- - Example usage for government cloud: ``` C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-USGov.zip MD5 ```
-3. Verify the latest version of the appliance, and [hash values](tutorial-discover-physical.md#verify-security) settings.
-
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
-## Run the Azure Migrate installer script
-The installer script does the following:
+ **Download** | **Hash value**
+ |
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
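As an alternative to CertUtil, here is a small PowerShell sketch that computes the hash and compares it against the published value (the path assumes the download location used in the example above):

```powershell
# Sketch: verify the downloaded zip against the published SHA256 value.
$expected = '15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f'
$actual = (Get-FileHash 'C:\Users\administrator\Desktop\AzureMigrateInstaller.zip' -Algorithm SHA256).Hash
if ($actual -eq $expected) { 'Hash matches.' } else { "Hash mismatch: $actual" }
```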
-- Installs agents and a web application for physical server discovery and assessment.-- Install Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Download and installs an IIS rewritable module.-- Updates a registry key (HKLM) with persistent setting details for Azure Migrate.-- Creates the following files under the path:
- - **Config Files**: %Programdata%\Microsoft Azure\Config
- - **Log Files**: %Programdata%\Microsoft Azure\Logs
+> [!NOTE]
+> The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
-Run the script as follows:
+### Run the Azure Migrate installer script
-1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server having an existing Azure Migrate appliance.
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
2. Launch PowerShell on the above server with administrative (elevated) privilege. 3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file. 4. Run the script named **AzureMigrateInstaller.ps1** by running the following command:
- - For the public cloud:
- ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public> .\AzureMigrateInstaller.ps1 ```
- - For Azure Government:
-
- ``` PS C:\Users\Administrators\Desktop\AzureMigrateInstaller-Server-USGov>.\AzureMigrateInstaller.ps1 ```
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **physical servers** _(or servers running on other clouds like AWS, GCP, Xen etc.)_ to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
+
+ :::image type="content" source="./media/tutorial-discover-physical/script-physical-default-1.png" alt-text="Screenshot that shows how to set up appliance with desired configuration.":::
- The script will launch the appliance web application when it finishes successfully.
+6. The installer script does the following:
-If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
+ - Installs agents and a web application.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewrite module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify appliance access to Azure
migrate How To Use Azure Migrate With Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/how-to-use-azure-migrate-with-private-endpoints.md
To set up the appliance:
1. After you download the zipped file, verify the file security. 1. Run the installer script to deploy the appliance.
-Here are the download links for each of the scenarios.
-
-Scenario | Download link | Hash value
- | |
-Hyper-V | [AzureMigrateInstaller-HyperV-Public-PrivateLink.zip](https://go.microsoft.com/fwlink/?linkid=2160557) | CBF8927AF137A106E2A34AC4F77CFFCB1CD96873C592E1DF37BC5606254989EC
-Physical | [AzureMigrateInstaller-Physical-Public-PrivateLink.zip](https://go.microsoft.com/fwlink/?linkid=2160558) | 1CB967D92096EB48E4C3C809097F52C8341FC7CA7607CF840C529E7A21B1A21D
-VMware | [AzureMigrateInstaller-VMware-public-PrivateLink.zip](https://go.microsoft.com/fwlink/?linkid=2160648) | 0A4FCC4D1500442C5EB35E4095EF781CB17E8ECFE8E4F8C859E65231E00BB154
-VMware scale-out | [AzureMigrateInstaller-VMware-Public-Scaleout-PrivateLink.zip](https://go.microsoft.com/fwlink/?linkid=2160811) | 2F035D34E982EE507EAEC59148FDA8327A45D2A845B4A95475EC6D2469D72D28
- #### Verify security
-Check that the zipped file is secure before you deploy it.
-
-1. Open an administrator command window on the server to which you downloaded the file.
-1. To generate the hash for the zipped file, run the following command:
+Check that the zipped file is secure, before you deploy it.
+1. On the server to which you downloaded the file, open an administrator command window.
+2. Run the following command to generate the hash for the zipped file:
- ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example usage for public cloud: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink.zip SHA256 ```
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
-1. Verify the latest version of the appliance by comparing the hash values from the preceding table.
+ **Download** | **Hash value**
+ |
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2160648) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
+
+> [!NOTE]
+> The same script can be used to set up an appliance with private endpoint connectivity for any of the chosen scenarios, such as VMware, Hyper-V, physical or other by choosing from the scenario and cloud options to deploy an appliance with the desired configuration.
Make sure the server meets the [hardware requirements](./migrate-appliance.md) for the chosen scenario, such as VMware, Hyper-V, physical or other, and can connect to the [required URLs](./migrate-appliance.md#public-cloud-urls-for-private-link-connectivity).
-#### Run the script
+#### Run the Azure Migrate installer script
+
+1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance.
+2. Launch PowerShell on the above server with administrative (elevated) privilege.
+3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file.
+4. Run the script named **AzureMigrateInstaller.ps1** by running the following command:
+
+
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **servers running in your VMware environment** to an Azure Migrate project with **private endpoint connectivity** on **Azure public cloud**.
-1. Extract the zipped file to a folder on the server that will host the appliance.
-1. Open PowerShell on the machine, with administrator (elevated) privileges.
-1. Change the PowerShell directory to the folder that contains the contents extracted from the downloaded zipped file.
-1. Run the script **AzureMigrateInstaller.ps1**, as follows:
+ :::image type="content" source="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration for private endpoint." lightbox="./media/how-to-use-azure-migrate-with-private-endpoints/script-vmware-private-expanded.png":::
- ```
- PS C:\Users\administrator\Desktop\AzureMigrateInstaller-VMware-public-PrivateLink> .\AzureMigrateInstaller.ps1
- ```
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
-1. After the script runs successfully, it launches the appliance configuration manager so that you can configure the appliance. If you come across any issues, review the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log.
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Configure the appliance and start continuous discovery
Open a browser on any machine that can connect to the appliance server. Open the
- Select **Set up proxy** to specify the proxy address `http://ProxyIPAddress` or `http://ProxyFQDN` and listening port. - Specify credentials if the proxy needs authentication. Only HTTP proxy is supported. - You can add a list of URLs or IP addresses that should bypass the proxy server.
+ ![Adding to bypass proxy list](./media/how-to-use-azure-migrate-with-private-endpoints/bypass-proxy-list.png)
- Select **Save** to register the configuration if you've updated the proxy server details or added URLs or IP addresses to bypass proxy. > [!Note]
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-physical.md
Title: Support for physical discovery and assessment in Azure Migrate description: Learn about support for physical discovery and assessment with Azure Migrate Discovery and assessment--++ ms. Last updated 03/18/2021
To assess physical servers, you create a project, and add the Azure Migrate: Dis
**Permissions:** -- For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined. The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+Set up an account that the appliance can use to access the physical servers.
+
+**Windows servers**
+
+- For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined.
+- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+- If the Remote Management Users group isn't present, add the user account to the group: **WinRMRemoteWMIUsers_**.
+- The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.
+- In some cases, adding the account to these groups may not return the required data from WMI classes because the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome UAC filtering, the user account needs the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
+ > [!Note]
- > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers and the domain/local account used to access the servers is added to these groups: Performance Monitor Users, Performance Log Users and WinRMRemoteWMIUsers.
-- For Linux servers, you need a root account on the Linux servers that you want to discover. Alternately, you can set a non-root account with the required capabilities using the following commands:
+ > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
+
+**Linux servers**
+
+- You need a root account on the servers that you want to discover. Alternately, you can provide a user account with sudo permissions.
+- Support for adding a user account with sudo access is provided by default with the new appliance installer script downloaded from the portal after July 20, 2021.
+- For older appliances, you can enable the capability by following these steps:
+ 1. On the server running the appliance, open the Registry Editor.
+ 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
+ 1. Create a registry key 'isSudo' with a DWORD value of 1. (A scripted equivalent is sketched after the screenshot below.)
+
+ :::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support.":::
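As a scripted equivalent of the steps above, you can run something like the following from an elevated PowerShell prompt on the appliance server (a sketch only; it assumes the AzureAppliance registry key already exists on the appliance, as described in the steps):

```powershell
# Sketch: create the 'isSudo' DWORD value (1 = allow a sudo-capable user account).
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AzureAppliance' -Name 'isSudo' -PropertyType DWord -Value 1 -Force
```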
+
+- To discover the configuration and performance metadata from the target server, you need to enable sudo access for the commands listed [here](migrate-appliance.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account so that the required commands can run without prompting for a password every time sudo is invoked.
+- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access:
+
+ Operating system | Versions
+ |
+ Red Hat Enterprise Linux | 6,7,8
+ CentOS | 6.6, 8.2
+ Ubuntu | 14.04,16.04,18.04
+ SUSE Linux | 11.4, 12.4
+ Debian | 7, 10
+ Amazon Linux | 2.0.2021
+ CoreOS Container | 2345.3.0
+
+- If you cannot provide a root account or a user account with sudo access, you can set the 'isSudo' registry key to '0' under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance and provide a non-root account with the required capabilities using the following commands:
**Command** | **Purpose** | |
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/migrate-support-matrix-vmware.md
Title: VMware server assessment support in Azure Migrate description: Learn about Azure Migrate discovery and assessment support for servers in a VMware environment.--++ ms. Last updated 03/17/2021
Learn more about [assessments](concepts-assessment-calculation.md).
VMware | Details |
-**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported.
+**vCenter Server** | Servers that you want to discover and assess must be managed by vCenter Server version 7.0, 6.7, 6.5, 6.0, or 5.5.<br /><br /> Discovering servers by providing ESXi host details in the appliance currently isn't supported. <br /><br /> IPv6 addresses are not supported for vCenter Server (for discovery and assessment of servers) and ESXi hosts (for replication of servers).
**Permissions** | The Azure Migrate: Discovery and assessment tool requires a vCenter Server read-only account.<br /><br /> If you want to use the tool for software inventory and agentless dependency analysis, the account must have privileges for guest operations on VMware VMs. ## Server requirements
Support | Details
**Supported SQL services** | Only SQL Server Database Engine is supported. <br /><br /> Discovery of SQL Server Reporting Services (SSRS), SQL Server Integration Services (SSIS), and SQL Server Analysis Services (SSAS) is not supported. > [!NOTE]
-> Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances when the [TrustServerCertificate](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder.trustservercertificate) property is set to `true`. The transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. The appliance server must be set up to [trust the certificate's root authority](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
->
-> If no certificate has been provisioned on the server when it starts, SQL Server generates a self-signed certificate that's used to encrypt login packets. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
+> By default, Azure Migrate uses the most secure way of connecting to SQL instances; that is, Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. Hence, the appliance server must be set up to trust the certificate's root authority.
 >
+> However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](https://go.microsoft.com/fwlink/?linkid=2158046) to understand what to choose.
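For illustration, here is a minimal PowerShell sketch of a client connection that uses the same default trust settings described above; the server name, user, and password are placeholders, not values from the appliance:

```powershell
# Sketch: test an encrypted connection with the same defaults (Encrypt + TrustServerCertificate).
$connStr = 'Server=<SqlServerFQDN>;User Id=<User>;Password=<Password>;Encrypt=True;TrustServerCertificate=True'
$conn = New-Object System.Data.SqlClient.SqlConnection $connStr
$conn.Open()
$conn.State   # Reports 'Open' if the encrypted connection succeeds
$conn.Close()
```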
+ ## Dependency analysis requirements (agentless)
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/troubleshoot-appliance.md
Title: Troubleshoot Azure Migrate appliance deployment and discovery description: Get help with appliance deployment and server discovery.--++ ms. Last updated 07/01/2020
This article helps you troubleshoot issues when deploying the [Azure Migrate](mi
[Review](migrate-appliance.md) the appliance support requirements.
-## "Invalid OVF manifest entry"
+## "Invalid OVF manifest entry" during appliance set up
-If you receive the error "The provided manifest file is invalid: Invalid OVF manifest entry", do the following:
+**Error**
+
+You are getting the error "The provided manifest file is invalid: Invalid OVF manifest entry" when setting up an appliance using OVA template.
+
+**Remediation**
1. Verify that the Azure Migrate appliance OVA file is downloaded correctly by checking its hash value. [Learn more](./tutorial-discover-vmware.md). If the hash value doesn't match, download the OVA file again and retry the deployment. 2. If deployment still fails, and you're using the VMware vSphere client to deploy the OVF file, try deploying it through the vSphere web client. If deployment still fails, try using a different web browser.
If you receive the error "The provided manifest file is invalid: Invalid OVF man
- In **Home** > **Inventory**, select **File** > **Deploy OVF template**. Browse to the OVA and complete the deployment. 4. If the deployment still fails, contact Azure Migrate support.
-## Can't connect to the internet
+## Connectivity check failing during 'Set up prerequisites'
+
+**Error**
+
+You are getting an error in the connectivity check on the appliance.
+
+**Remediation**
+
+1. Ensure that you can connect to the required [URLs](/azure/migrate/migrate-appliance#url-access) from the appliance
+1. Check if there is a proxy or firewall blocking access to these URLs. If you are required to create an allowlist, make sure that you include all of the URLs.
+1. If there is a proxy server configured on-premises, make sure that you provide the proxy details correctly by selecting **Setup proxy** in the same step. Make sure that you provide the authorization credentials if the proxy needs them.
+1. Ensure that the server has not been previously used to set up the [replication appliance](/azure/migrate/migrate-replication-appliance) or that you have the mobility service agent installed on the server.
+
+## Connectivity check failing for aka.ms URL during 'Set up prerequisites'
+
+**Error**
+
+You are getting an error in the connectivity check on the appliance for aka.ms URL.
+
+**Remediation**
+
+1. Ensure that you have connectivity to the internet and have allowlisted the URL aka.ms/* to download the latest versions of the services.
+2. Check if there is a proxy/firewall blocking access to this URL. Ensure that you have provided the proxy details correctly in the prerequisites step of the configuration manager.
+3. You can go back to the appliance configuration manager and rerun prerequisites to initiate auto-update.
+3. If retry doesn't help, you can download the *latestcomponents.json* file from [here](https://aka.ms/latestapplianceservices) to check the latest versions of the services that are failing and manually update them from the download links in the file.
+
+ If you have enabled the appliance for **private endpoint connectivity**, and don't want to allow access to this URL over internet, you can [disable auto-update](/azure/migrate/migrate-appliance#turn-off-auto-update), as the aka.ms link is required for this service.
+
+ >[!Note]
+ >If you disable auto-update service, the services running on the appliance will not get the latest updates automatically. To get around this, [update the appliance services manually](/azure/migrate/migrate-appliance#manually-update-an-older-version).
+
+## Auto Update check failing during 'Set up prerequisites'
+
+**Error**
+
+You are getting an error in the auto update check on the appliance.
+
+**Remediation**
+
+1. Make sure that you created an allowlist for the [required URLs](/azure/migrate/migrate-appliance#url-access) and that no proxy or firewall setting is blocking them.
+1. If the update of any appliance component is failing, either rerun the prerequisites or [manually update the appliance services](/azure/migrate/migrate-appliance#manually-update-an-older-version).
+
+## Time sync check failing during 'Set up prerequisites'
+
+**Error**
+
+An error about time synchronization indicates that the server clock might be out of synchronization with the current time by more than five minutes.
+
+**Remediation**
+
+- Ensure that the appliance server time is synchronized with the internet time by checking the date and time settings from control panel.
+- You can also change the clock time on the appliance server to match the current time by following these steps:
+ 1. Open an admin command prompt on the server.
+ 2. To check the time zone, run **w32tm /tz**.
+ 3. To synchronize the time, run **w32tm /resync**.
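If the server isn't pointed at any internet time source at all, a sketch of the commands to configure one and force a resync follows (time.windows.com is only an example NTP server; run from an elevated prompt):

```
w32tm /config /manualpeerlist:"time.windows.com" /syncfromflags:manual /update
w32tm /resync
```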
+
+## VDDK check failing during 'Set up prerequisites' on VMware appliance
+
+**Remediation**
+
+1. Ensure that you have downloaded the VDDK kit 6.7 and have copied its files to **C:\Program Files\VMware\VMware Virtual Disk Development Kit** on the appliance server.
+2. Ensure that no other software or application is using another version of the VDDK Kit on the appliance.
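A quick check (assuming the default VDDK location mentioned above) to confirm the copied files are present on the appliance server:

```powershell
# Sketch: list the VDDK files copied to the expected folder on the appliance server.
Get-ChildItem 'C:\Program Files\VMware\VMware Virtual Disk Development Kit' -Recurse |
    Select-Object FullName
```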
+
+## Getting project key related error during appliance registration
+
+**Error**
+You are having issues when you try to register the appliance using the Azure Migrate project key copied from the project.
+
+**Remediation**
+
+1. Ensure that you've copied the correct key from the project: On the **Azure Migrate: Discovery and Assessment** card in your project, select **Discover**, and then select **Manage Existing appliance** in step 1. Select the appliance name (for which you previously generated a key) from the drop-down menu and copy the corresponding key.
+2. Ensure that you're pasting the key to the appliance of the right **cloud type** (Public/US Gov) and **appliance type** (VMware/Hyper-V/Physical or other). Check at the top of appliance configuration manager to confirm the cloud and scenario type.
+
+## "Failed to connect to the Azure Migrate project" during appliance registration
+
+**Error**
+
+After a successful login with an Azure user account, the appliance registration step fails with the message, "Failed to connect to the Azure Migrate project. Check the error detail and follow the remediation steps by clicking Retry".
+
+This issue happens when the Azure user account that was used to log in from the appliance configuration manager is different from the user account that was used to generate the Azure Migrate project key on the portal.
+
+**Remediation**
+1. To complete the registration of the appliance, use the same Azure user account that generated the Azure Migrate project key on the portal OR
+2. Assign the required roles and [permissions](/azure/migrate/tutorial-prepare-vmware#prepare-azure) to the other Azure user account being used for appliance registration
+
+## "Azure Active Directory (AAD) operation failed with status Forbidden" during appliance registration
+
+**Error**
-This can happen if the appliance server is behind a proxy.
+You are unable to complete registration due to insufficient AAD privileges and get the error, "Azure Active Directory (AAD) operation failed with status Forbidden".
-- Make sure you provide the authorization credentials if the proxy needs them.-- If you're using a URL-based firewall proxy to control outbound connectivity, add [these URLs](migrate-appliance.md#url-access) to an allowlist.-- If you're using an intercepting proxy to connect to the internet, import the proxy certificate onto the appliance using [these steps](./migrate-appliance.md).
+**Remediation**
-## Can't sign into Azure from the appliance web app
+Ensure that you have the [required permissions](/azure/migrate/tutorial-prepare-vmware#prepare-azure) to create and manage AAD Applications in Azure. You should have the **Application Developer** role OR the user role with **User can register applications** allowed at the tenant level.
-The error "Sorry, but we're having trouble signing you in" appears if you're using the incorrect Azure account to sign into Azure. This error occurs for a couple of reasons:
+## "Forbidden to access Key Vault" during appliance registration
-- If you sign into the appliance web application for the public cloud, using user account credentials for the Government cloud portal.-- If you sign into the appliance web application for the government cloud using user account credentials for the private cloud portal.
+**Error**
-Ensure you're using the correct credentials.
+Azure Key Vault create or update operation failed for "{KeyVaultName}" due to the error: "{KeyVaultErrorMessage}".
-## Date/time synchronization error
+This usually happens when the Azure user account that was used to register the appliance is different from the account used to generate the Azure Migrate project key on the portal (that is, when the Key vault was created).
-An error about date and time synchronization (802) indicates that the server clock might be out of synchronization with the current time by more than five minutes. Change the clock time on the collector server to match the current time:
+**Remediation**
-1. Open an admin command prompt on the server.
-2. To check the time zone, run **w32tm /tz**.
-3. To synchronize the time, run **w32tm /resync**.
+1. Ensure that the currently logged in user account on the appliance has the required permissions on the Key Vault (mentioned in the error message). The user account needs permissions as mentioned [here](/azure/migrate/tutorial-discover-vmware#prepare-an-azure-user-account).
+2. Go to the Key Vault and ensure that your user account has an access policy with all the _Key, Secret and Certificate_ permissions assigned under the Key Vault access policy. [Learn more](/azure/key-vault/general/assign-access-policy-portal).
+3. If you have enabled the appliance for **private endpoint connectivity**, ensure that the appliance is either hosted in the same VNet where the Key Vault has been created or it is connected to the Azure VNet (where Key Vault has been created) over a private link. Make sure that the Key Vault private link is resolvable from the appliance. Go to Azure Migrate: Discovery and assessment> Properties to find the details of private endpoints for resources like the Key Vault created during the Azure Migrate key creation. [Learn more](https://go.microsoft.com/fwlink/?linkid=2162447)
+4. If you have the required permissions and connectivity, re-try the registration on the appliance after some time.
-## "UnableToConnectToServer"
+## Unable to connect to vCenter Server during validation
+
+**Error**
If you get this connection error, you might be unable to connect to vCenter Server *Servername*.com:9443. The error details indicate that there's no endpoint listening at `https://\*servername*.com:9443/sdk` that can accept the message.
+**Remediation**
+ - Check whether you're running the latest version of the appliance. If you're not, upgrade the appliance to the [latest version](./migrate-appliance.md). - If the issue still occurs in the latest version, the appliance might be unable to resolve the specified vCenter Server name, or the specified port might be wrong. By default, if the port is not specified, the collector will try to connect to port number 443.
If you get this connection error, you might be unable to connect to vCenter Serv
3. Identify the correct port number to connect to vCenter Server. 4. Verify that vCenter Server is up and running.
-## Error 60052/60039: Appliance might not be registered
+## Server credentials (domain) failing validation on VMware appliance
+
+**Error**
+
+You are getting "Validation failed" for domain credentials added on the VMware appliance to perform software inventory and agentless dependency analysis.
+
+**Remediation**
+
+1. Check that you have provided the correct domain name and credentials.
+1. Ensure that the domain is reachable from the appliance to validate the credentials. The appliance may have line-of-sight issues, or the domain name may not be resolvable from the appliance server.
+1. You can select **Edit** to update the domain name or credentials, and select **Revalidate credentials** to validate the credentials again after some time.
+
+## "Access is denied" when connecting to Hyper-V hosts or clusters during validation
+
+**Error**
+
+You are unable to validate the added Hyper-V host/cluster due to the error "Access is denied".
-- Error 60052, "The appliance might not be registered successfully to the project" occurs if the Azure account used to register the appliance has insufficient permissions.
- - Make sure that the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
- - [Learn more](./migrate-appliance.md#appliancevmware) about required Azure roles and permissions.
-- Error 60039, "The appliance might not be registered successfully to the project" can occur if registration fails because the project used to the register the appliance can't be found.
- - In the Azure portal and check whether the project exists in the resource group.
- - If the project doesn't exist, create a new project in your resource group and register the appliance again. [Learn how to](./create-manage-projects.md#create-a-project-for-the-first-time) create a new project.
+**Remediation**
-## Error 60030/60031: Key Vault management operation failed
+1. Ensure that you have met all the [prerequisites for the Hyper-V hosts](/azure/migrate/migrate-support-matrix-hyper-v#hyper-v-host-requirements).
+1. Check the steps [**here**](/azure/migrate/tutorial-discover-hyper-v#prepare-hyper-v-hosts) on how to prepare the Hyper-V hosts manually or using a provisioning PowerShell script.
-If you receive the error 60030 or 60031, "An Azure Key Vault management operation failed", do the following:
+## "The server does not support WS-Management Identify operations" during validation
-- Make sure the Azure user account used to register the appliance has at least Contributor permissions on the subscription.-- Make sure the account has access to the key vault specified in the error message, and then retry the operation.-- If the issue persists, contact Microsoft support.-- [Learn more](./migrate-appliance.md#appliancevmware) about the required Azure roles and permissions.
+**Error**
-## Error 60028: Discovery couldn't be initiated
+You are not able to validate Hyper-V clusters on the appliance due to the error: "The server does not support WS-Management Identify operations. Skip the TestConnection part of the request and try again."
-Error 60028: "Discovery couldn't be initiated because of an error. The operation failed for the specified list of hosts or clusters" indicates that discovery couldn't be started on the hosts listed in the error because of a problem in accessing or retrieving server information. The rest of the hosts were successfully added.
+**Remediation**
-- Add the hosts listed in the error again, using the **Add host** option.-- If there's a validation error, review the remediation guidance to fix the errors, and then try the **Save and start discovery** option again.
+This is usually seen when you have provided a proxy configuration on the appliance. The appliance connects to the clusters using the short name for the cluster nodes, even if you have provided the FQDN of the node. Add the short names for the cluster nodes to the bypass proxy list on the appliance; this resolves the issue and validation of the Hyper-V cluster succeeds.
-## Error 60025: Azure AD operation failed
+## "Can't connect to host or cluster" during validation on Hyper-V appliance
-Error 60025: "An Azure AD operation failed. The error occurred while creating or updating the Azure AD application" occurs when the Azure user account used to initiate the discovery is different from the account used to register the appliance. Do one of the following:
+**Error**
-- Ensure that the user account initiating the discovery is same as the one used to register the appliance.-- Provide Azure Active Directory application access permissions to the user account for which the discovery operation is failing.-- Delete the resource group previously created for the project. Create another resource group to start again.-- [Learn more](./migrate-appliance.md#appliancevmware) about Azure Active Directory application permissions.
+"Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.
-## Error 50004: Can't connect to host or cluster
+This usually happens when you have added the IP address of a host which cannot be resolved by DNS. You might also see this error for hosts in a cluster. This indicates that the appliance can connect to the cluster, but the cluster returns host names that are not FQDNs.
-Error 50004: "Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.
+**Remediation**
-- If you see this error on the cluster, cluster FQDN.-- You might also see this error for hosts in a cluster. This indicates that the appliance can connect to the cluster, but the cluster returns host names that aren't FQDNs. To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names:
- 1. Open Notepad as an admin.
- 2. Open the C:\Windows\System32\Drivers\etc\hosts file.
- 3. Add the IP address and host name in a row. Repeat for each host or cluster where you see this error.
- 4. Save and close the hosts file.
- 5. Check whether the appliance can connect to the hosts, using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.
+To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names:
+1. Open Notepad as an admin.
+1. Open the C:\Windows\System32\Drivers\etc\hosts file.
+1. Add the IP address and host name in a row. Repeat for each host or cluster where you see this error.
+1. Save and close the hosts file.
+1. Check whether the appliance can connect to the hosts, using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.
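For example, a hedged sketch of appending such mappings from an elevated PowerShell prompt; the IP addresses and host names below are placeholders and must be replaced with your own:

```powershell
# Sketch: add example IP-to-hostname mappings to the hosts file (requires elevation).
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.0.0.11    hyperv-host-01"
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.0.0.12    hyperv-host-02"
```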
-## Error 60001: Unable to connect to server
+## "Unable to connect to server" during validation of Physical servers
-- Ensure there is connectivity from the appliance to the server-- If it is a linux server, ensure password-based authentication is enabled using the following steps:
+**Remediation**
+
+- Ensure there is connectivity from the appliance to the target server.
+- If it is a Linux server, ensure password-based authentication is enabled using the following steps:
1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config' 2. Set "PasswordAuthentication" option to yes. Save the file. 3. Restart ssh service by running "service sshd restart"-- If it is a windows server, ensure the port 5985 is open to allow for remote WMI calls.-- If you are discovering a GCP linux server and using a root user, use the following commands to change the default setting for root login
+- If it is a Windows server, ensure the port 5985 is open to allow for remote WMI calls.
+- If you are discovering a GCP Linux server and using a root user, use the following commands to change the default setting for root login
1. Log in to the linux server and open the ssh configuration file using the command 'vi /etc/ssh/sshd_config' 2. Set "PermitRootLogin" option to yes. 3. Restart ssh service by running "service sshd restart"
-## Error: No suitable authentication method found
+## "Failed to fetch BIOS GUID" for server during validation
+
+**Error**
+
+The validation of a physical server fails on the appliance with the error message-"Failed to fetch BIOS GUID".
+
+**Remediation**
+
+**Linux servers:**
+Connect to the target server that is failing validation and run the following commands to see if it returns the BIOS GUID of the server:
+````
+cat /sys/class/dmi/id/product_uuid
+dmidecode | grep -i uuid | awk '{print $2}'
+````
+You can also run the commands from command prompt on the appliance server by making an SSH connection with the target Linux server using the following command:
+````
+ssh <username>@<servername>
+````
+
+**Windows servers:**
+Run the following code in PowerShell from the appliance server for the target server that is failing validation to see if it returns the BIOS GUID of the server:
+````
+[CmdletBinding()]
+Param(
+    # IP address, FQDN, or hostname of the server that is failing validation
+    [Parameter(Mandatory=$True,Position=1)]
+    [string]$Hostname
+)
+# WMI namespace that holds Win32_ComputerSystemProduct
+$HostNS = "root\cimv2"
+$error.Clear()
+
+# Prompt for credentials and open a CIM session to the target server
+$Cred = Get-Credential
+$Session = New-CimSession -Credential $Cred -ComputerName $Hostname
+
+if ($Session -eq $null -or $Session.TestConnection() -eq $false)
+{
+    Write-Host "Connection failed with $Hostname due to $error"
+    exit -1
+}
+
+Write-Host "Connection established with $Hostname"
+#Get-WmiObject -Query "select uuid from Win32_ComputerSystemProduct"
+
+# Query the BIOS GUID (UUID); an empty or missing UUID explains the validation failure
+$HostInstance = $Session.QueryInstances($HostNS, "WQL", "Select UUID from Win32_ComputerSystemProduct")
+$HostInstance | fl *
+````
+
+When you run the code above, provide the hostname of the target server, which can be an IP address, FQDN, or hostname. You will then be prompted for the credentials to connect to the server.
+
+## "No suitable authentication method found" for server during validation
+
+**Error**
+
+You are getting this error when you are trying to validate a Linux server through the physical appliance: "No suitable authentication method found".
+
+**Remediation**
Ensure password-based authentication is enabled on the linux server using the following steps:
Ensure password-based authentication is enabled on the linux server using the fo
2. Set "PasswordAuthentication" option to yes. Save the file. 3. Restart ssh service by running "service sshd restart"
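For reference, a minimal sketch of what the relevant sshd_config line should look like after the edit, followed by the restart command from the steps above:

```
# /etc/ssh/sshd_config (relevant line after editing)
PasswordAuthentication yes

# Restart the SSH daemon so the change takes effect
service sshd restart
```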
+## "Access is denied" when connecting to physical servers during validation
+
+**Error**
+
+You are getting this error when you are trying to validate a Windows server through the physical appliance: "WS-Management service cannot process the request. The WMI service returned an access denied error."
+
+**Remediation**
+
+- If you are getting this error, make sure that the user account provided (domain or local) on the appliance configuration manager has been added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+- If the Remote Management Users group isn't present, add the user account to the group WinRMRemoteWMIUsers_.
+- You can also check whether the WS-Management protocol is enabled on the server by running the following command in the command prompt of the target server.
+
+ ```` winrm qc ````
+- If you are still facing the issue, make sure that the user account has access permissions to CIMV2 Namespace and sub-namespaces in WMI Control Panel. You can set the access by following these steps:
+    1. Go to the server that is failing validation on the appliance.
+    2. Search for and select 'Run' from the Start menu. In the 'Run' dialog box, type wmimgmt.msc in the 'Open:' text field and press Enter.
+    3. The wmimgmt console opens, where you can find "WMI Control (Local)" in the left panel. Right-click it and select 'Properties' from the menu.
+    4. In the 'WMI Control (Local) Properties' dialog box, select the 'Security' tab.
+    5. On the Security tab, expand the "Root" folder in the namespace tree and select the "cimv2" namespace.
+    6. Click the 'Security' button to open the 'Security for ROOT\cimv2' dialog box.
+    7. Under the 'Group or user names' section, click the 'Add' button to open the 'Select Users, Computers, Service Accounts or Groups' dialog box.
+    8. Search for the user account, select it, and click 'OK' to return to the 'Security for ROOT\cimv2' dialog box.
+    9. In the 'Group or user names' section, select the user account just added and check that the following permissions are allowed:<br/>
+    Enable account <br/>
+    Remote enable
+    10. Click 'Apply' to enable the permissions set on the user account.
+
+- The same steps also apply to a local user account for non-domain/workgroup servers. In some cases, [UAC](/windows/win32/wmisdk/user-account-control-and-wmi) filtering may block some WMI properties because the commands run as a standard user. You can either use a local administrator account or disable UAC so that the local user account is not filtered and instead becomes a full administrator.
+- Disabling Remote UAC by changing the registry entry that controls Remote UAC is not recommended but may be necessary in a workgroup. The registry entry is HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system\LocalAccountTokenFilterPolicy. When the value of this entry is zero (0), Remote UAC access token filtering is enabled. When the value is 1, remote UAC is disabled.
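If you do decide to change it (workgroup servers only; weigh the security trade-off first), a sketch of setting that registry value from an elevated PowerShell prompt:

```powershell
# Sketch: set LocalAccountTokenFilterPolicy to 1 to disable Remote UAC token filtering.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
    -Name 'LocalAccountTokenFilterPolicy' -PropertyType DWord -Value 1 -Force
```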
+
+## Appliance is disconnected
+
+**Error**
+
+You are getting an "appliance is disconnected" error message when you try to enable replication on a few VMware servers from the portal.
+
+This can happen if the appliance is in a shut-down state or the DRA service on the appliance cannot communicate with Azure.
+
+**Remediation**
+
+ 1. Go to the appliance configuration manager and rerun prerequisites to see the status of the DRA service under **View appliance services**.
+ 1. If the service is not running, stop and restart the service from the command prompt, using the following commands:
+
+ ````
+ net stop dra
+ net start dra
+ ````
## Next steps
migrate Tutorial Assess Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-assess-physical.md
Run an assessment as follows:
1. On the **Overview** page > **Windows, Linux and SQL Server**, click **Assess and migrate servers**.
- ![Location of Assess and migrate servers button](./media/tutorial-assess-vmware-azure-vm/assess.png)
+ ![Location of Assess and migrate servers button.](./media/tutorial-assess-vmware-azure-vm/assess.png)
2. In **Azure Migrate: Discovery and assessment**, click **Assess**.
migrate Tutorial Discover Physical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-physical.md
Title: Discover physical servers with Azure Migrate Discovery and assessment description: Learn how to discover on-premises physical servers with Azure Migrate Discovery and assessment.--++ ms. Last updated 03/11/2021
If you just created a free Azure account, you're the owner of your subscription.
Set up an account that the appliance can use to access the physical servers. -- For **Windows servers**, use a domain account for domain-joined servers, and a local account for server that are not domain-joined. The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+**Windows servers**
+
+- For Windows servers, use a domain account for domain-joined servers, and a local account for servers that are not domain-joined.
+- The user account should be added to these groups: Remote Management Users, Performance Monitor Users, and Performance Log Users.
+- If the Remote Management Users group isn't present, add the user account to the group: **WinRMRemoteWMIUsers_**.
+- The account needs these permissions for the appliance to create a CIM connection with the server and pull the required configuration and performance metadata from the WMI classes listed here.
+- In some cases, adding the account to these groups may not return the required data from WMI classes because the account might be filtered by [UAC](/windows/win32/wmisdk/user-account-control-and-wmi). To overcome UAC filtering, the user account needs the necessary permissions on the CIMV2 namespace and sub-namespaces on the target server. You can follow the steps [here](troubleshoot-appliance.md) to enable the required permissions.
+ > [!Note]
- > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers and the domain/local account used to access the servers is added to these groups: Performance Monitor Users, Performance Log Users and WinRMRemoteWMIUsers.
+ > For Windows Server 2008 and 2008 R2, ensure that WMF 3.0 is installed on the servers.
+
+**Linux servers**
+
+- You need a root account on the servers that you want to discover. Alternately, you can provide a user account with sudo permissions.
+- Support for adding a user account with sudo access is provided by default with the new appliance installer script downloaded from the portal after July 20, 2021.
+- For older appliances, you can enable the capability by following these steps:
+ 1. On the server running the appliance, open the Registry Editor.
+ 1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance.
+ 1. Create a registry key 'isSudo' with a DWORD value of 1.
+
+ :::image type="content" source="./media/tutorial-discover-physical/issudo-reg-key.png" alt-text="Screenshot that shows how to enable sudo support.":::
+
+- To discover the configuration and performance metadata from the target server, you need to enable sudo access for the commands listed [here](migrate-appliance.md#linux-server-metadata). Make sure that you have enabled 'NOPASSWD' for the account so that the required commands can run without prompting for a password every time sudo is invoked.
+- The following Linux OS distributions are supported for discovery by Azure Migrate using an account with sudo access:
-- For **Linux servers**, you need a root account on the Linux servers that you want to discover. Alternately, you can set a non-root account with the required capabilities using the following commands:
+ Operating system | Versions
+ |
+ Red Hat Enterprise Linux | 6,7,8
+ CentOS | 6.6, 8.2
+ Ubuntu | 14.04,16.04,18.04
+ SUSE Linux | 11.4, 12.4
+ Debian | 7, 10
+ Amazon Linux | 2.0.2021
+ CoreOS Container | 2345.3.0
+
+- If you cannot provide a root account or a user account with sudo access, you can set the 'isSudo' registry key to '0' under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureAppliance and provide a non-root account with the required capabilities using the following commands:
**Command** | **Purpose** | |
setcap "cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_setuid,<b
setcap CAP_DAC_READ_SEARCH+eip /usr/sbin/dmidecode | To collect BIOS serial number
chmod a+r /sys/class/dmi/id/product_uuid | To collect BIOS GUID

## Set up a project

Set up a new project.
To set up the appliance you:
1. Provide an appliance name and generate a project key in the portal. 2. Download a zipped file with Azure Migrate installer script from the Azure portal. 3. Extract the contents from the zipped file. Launch the PowerShell console with administrative privileges.
-4. Execute the PowerShell script to launch the appliance web application.
+4. Execute the PowerShell script to launch the appliance configuration manager.
5. Configure the appliance for the first time, and register it with the project using the project key. ### 1. Generate the project key
To set up the appliance you:
1. After the successful creation of the Azure resources, a **project key** is generated. 1. Copy the key as you will need it to complete the registration of the appliance during its configuration.
+ [ ![Selections for Generate Key.](./media/tutorial-assess-physical/generate-key-physical-inline-1.png)](./media/tutorial-assess-physical/generate-key-physical-expanded-1.png#lightbox)
+ ### 2. Download the installer script

In **2: Download Azure Migrate appliance**, click on **Download**.
Check that the zipped file is secure, before you deploy it.
1. On the server to which you downloaded the file, open an administrator command window. 2. Run the following command to generate the hash for the zipped file: - ```C:\>CertUtil -HashFile <file_location> [Hashing Algorithm]```
- - Example usage for public cloud: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public.zip SHA256 ```
- - Example usage for government cloud: ``` C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-USGov.zip SHA256 ```
-3. Verify the latest appliance versions and hash values:
- - For the public cloud:
+ - Example usage: ```C:\>CertUtil -HashFile C:\Users\administrator\Desktop\AzureMigrateInstaller.zip SHA256 ```
+3. Verify the latest appliance version and hash value:
- **Scenario** | **Download*** | **Hash value**
- | |
- Physical (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | ce5e6f0507936def8020eb7b3109173dad60fc51dd39c3bd23099bc9baaabe29
+ **Download** | **Hash value**
+ |
+ [Latest version](https://go.microsoft.com/fwlink/?linkid=2140334) | 15a94b637a39c53ac91a2d8b21cc3cca8905187e4d9fb4d895f4fa6fd2f30b9f
- - For Azure Government:
+> [!NOTE]
+> The same script can be used to set up Physical appliance for either Azure public or Azure Government cloud with public or private endpoint connectivity.
- **Scenario** | **Download*** | **Hash value**
- | |
- Physical (85.8 MB) | [Latest version](https://go.microsoft.com/fwlink/?linkid=2140338) | ae132ebc574caf231bf41886891040ffa7abbe150c8b50436818b69e58622276
-
### 3. Run the Azure Migrate installer script
-The installer script does the following:
--- Installs agents and a web application for physical server discovery and assessment.-- Install Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.-- Download and installs an IIS rewritable module.-- Updates a registry key (HKLM) with persistent setting details for Azure Migrate.-- Creates the following files under the path:
- - **Config Files**: %Programdata%\Microsoft Azure\Config
- - **Log Files**: %Programdata%\Microsoft Azure\Logs
-
-Run the script as follows:
1. Extract the zipped file to a folder on the server that will host the appliance. Make sure you don't run the script on a server with an existing Azure Migrate appliance. 2. Launch PowerShell on the above server with administrative (elevated) privilege. 3. Change the PowerShell directory to the folder where the contents have been extracted from the downloaded zipped file. 4. Run the script named **AzureMigrateInstaller.ps1** by running the following command:
- - For the public cloud:
- ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller-Server-Public> .\AzureMigrateInstaller.ps1 ```
- - For Azure Government:
-
- ``` PS C:\Users\Administrators\Desktop\AzureMigrateInstaller-Server-USGov>.\AzureMigrateInstaller.ps1 ```
+ ``` PS C:\Users\administrator\Desktop\AzureMigrateInstaller> .\AzureMigrateInstaller.ps1 ```
+
+5. Select from the scenario, cloud and connectivity options to deploy an appliance with the desired configuration. For instance, the selection shown below sets up an appliance to discover and assess **physical servers** _(or servers running on other clouds like AWS, GCP, Xen etc.)_ to an Azure Migrate project with **default _(public endpoint)_ connectivity** on **Azure public cloud**.
+
+ :::image type="content" source="./media/tutorial-discover-physical/script-physical-default-inline.png" alt-text="Screenshot that shows how to set up appliance with desired configuration" lightbox="./media/tutorial-discover-physical/script-physical-default-expanded.png":::
- The script will launch the appliance web application when it finishes successfully.
+6. The installer script does the following:
-If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
+ - Installs agents and a web application.
+ - Installs Windows roles, including Windows Activation Service, IIS, and PowerShell ISE.
+ - Downloads and installs an IIS rewrite module.
+ - Updates a registry key (HKLM) with persistent setting details for Azure Migrate.
+ - Creates the following files under the path:
+ - **Config Files**: %Programdata%\Microsoft Azure\Config
+ - **Log Files**: %Programdata%\Microsoft Azure\Logs
+
+After the script has executed successfully, the appliance configuration manager will be launched automatically.
+
+> [!NOTE]
+> If you come across any issues, you can access the script logs at C:\ProgramData\Microsoft Azure\Logs\AzureMigrateScenarioInstaller_<em>Timestamp</em>.log for troubleshooting.
### Verify appliance access to Azure
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/migrate/tutorial-discover-vmware.md
Title: Discover servers running in a VMware environment with Azure Migrate Discovery and assessment description: Learn how to discover on-premises servers, applications, and dependencies in a VMware environment by using the Azure Migrate Discovery and assessment tool.--++ ms. Last updated 03/25/2021
If validation fails, you can select a **Failed** status to see the validation er
### Start discovery
-To start vCenter Server discovery, in **Step 3: Provide server credentials to perform software inventory, agentless dependency analysis and discovery of SQL Server instances and databases**, select **Start discovery**. After the discovery is successfully initiated, you can check the discovery status by looking at the vCenter Server IP address or FQDN in the sources table.
-
-> [!NOTE]
-> Azure Migrate encrypts communication between the Azure Migrate appliance and the source SQL Server instances when the [TrustServerCertificate](/dotnet/api/system.data.sqlclient.sqlconnectionstringbuilder.trustservercertificate) property is set to `true`. The transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. The appliance server must be set up to [trust the certificate's root authority](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
->
-> If no certificate has been provisioned on the server when it starts, SQL Server generates a self-signed certificate that's used to encrypt login packets. [Learn more](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine).
->
+To start vCenter Server discovery, select **Start discovery**. After the discovery is successfully initiated, you can check the discovery status by looking at the vCenter Server IP address or FQDN in the sources table.
## How discovery works
To start vCenter Server discovery, in **Step 3: Provide server credentials to pe
* [Software inventory](how-to-discover-applications.md) identifies the SQL Server instances that are running on the servers. Using the information it collects, the appliance attempts to connect to the SQL Server instances through the Windows authentication credentials or the SQL Server authentication credentials that are provided on the appliance. Then, it gathers data on SQL Server databases and their properties. The SQL Server discovery is performed once every 24 hours. * Appliance can connect to only those SQL Server instances to which it has network line of sight, whereas software inventory by itself may not need network line of sight. * Discovery of installed applications might take longer than 15 minutes. The duration depends on the number of discovered servers. For 500 servers, it takes approximately one hour for the discovered inventory to appear in the Azure Migrate project in the portal.
-* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable agentless dependency analysis.
+* During software inventory, the added server credentials are iterated against servers and validated for agentless dependency analysis. When the discovery of servers is finished, in the portal, you can enable agentless dependency analysis on the servers. Only the servers on which validation succeeds can be selected to enable [agentless dependency analysis](how-to-create-group-machine-dependencies-agentless.md).
* SQL Server instances and databases data begin to appear in the portal within 24 hours after you start discovery.
+* By default, Azure Migrate uses the most secure way of connecting to SQL Server instances: it encrypts communication between the Azure Migrate appliance and the source SQL Server instances by setting the TrustServerCertificate property to `true`. Additionally, the transport layer uses SSL to encrypt the channel and bypass the certificate chain to validate trust. As a result, the appliance server must be set up to trust the certificate's root authority. However, you can modify the connection settings by selecting **Edit SQL Server connection properties** on the appliance. [Learn more](https://go.microsoft.com/fwlink/?linkid=2158046) to understand what to choose.
+
+ :::image type="content" source="./media/tutorial-discover-vmware/sql-connection-properties.png" alt-text="Screenshot that shows how to edit SQL Server connection properties.":::
## Next steps
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/tutorial-django-aks-database.md
In this quickstart, you deploy a Django application on Azure Kubernetes Service
## Create a resource group
-An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group, *django-project* using the [az-group-create](/cli/azure/groupt#az_group_create) command in the *eastus* location.
+An Azure resource group is a logical group in which Azure resources are deployed and managed. Let's create a resource group, *django-project* using the [az-group-create](/cli/azure/group#az_group_create) command in the *eastus* location.
```azurecli-interactive
az group create --name django-project --location eastus
```
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-api-keys.md
Cognitive Search uses API keys as its primary authentication methodology. For in
API keys are generated when the service is created. Passing a valid API key on the request is considered proof that the request is from an authorized client. There are two kinds of keys. *Admin keys* convey write permissions on the service and also grant rights to query system information. *Query keys* convey read permissions and can be used by apps to query a specific index. > [!NOTE]
-> Authorization for data plane operations using Azure role-based access control (RBAC) is now in preview. You can use this preview capability if you want to [use role assignments instead of API keys](search-security-rbac.md).
+> Authorization for data plane operations using Azure role-based access control (RBAC) is now in preview. You can use this preview capability to supplement or replace API keys [with Azure roles for Search](search-security-rbac.md).
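If you manage keys from scripts, admin and query keys can also be retrieved with the Az.Search PowerShell module. A minimal sketch, assuming the module is installed and using placeholder resource group and service names:

```powershell
# List the admin key pair and query keys for a search service
Get-AzSearchAdminKeyPair -ResourceGroupName "<resource-group>" -ServiceName "<search-service>"
Get-AzSearchQueryKey -ResourceGroupName "<resource-group>" -ServiceName "<search-service>"
```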
## Using API keys in search
After you create new keys via portal or management layer, access is restored to
## Secure API keys
-Through [role-based permissions](search-security-rbac.md), you can delete or read the keys, but you can't replace a key with a user-defined password or use Active Directory as the primary authentication methodology for accessing search operations.
+[Role assignments](search-security-rbac.md) determine who can read and manage keys. Members of the following roles can view and regenerate keys: Owner, Contributor, [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor). The Reader role does not have access to API keys.
-Key security is ensured by restricting access via the portal or Resource Manager interfaces (PowerShell or command-line interface). As noted, subscription administrators can view and regenerate all API keys. As a precaution, review role assignments to understand who has access to the admin keys.
+Subscription administrators can view and regenerate all API keys. As a precaution, review role assignments to understand who has access to the admin keys.
-+ In the service dashboard, click **Access control (IAM)** and then the **Role assignments** tab to view role assignments for your service.
-
-Members of the following roles can view and regenerate keys: Owner, Contributor, [Search Service Contributors](../role-based-access-control/built-in-roles.md#search-service-contributor)
+1. Navigate to your search service page in Azure portal.
+1. On the left navigation pane, select **Access control (IAM)**, and then select the **Role assignments** tab.
+1. Set **Scope** to **This resource** to view role assignments for your service.
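To review the same assignments from a script, the following Az PowerShell sketch lists role assignments scoped to the service (subscription, resource group, and service names are placeholders):

```powershell
# List role assignments at the search service scope (includes inherited assignments)
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>"
Get-AzRoleAssignment -Scope $scope |
    Select-Object DisplayName, RoleDefinitionName, Scope
```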
## See also
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Previously updated : 07/19/2021 Last updated : 07/23/2021 # Use role-based authorization in Azure Cognitive Search Azure provides a global [role-based access control (RBAC) authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can use role authorization in the following ways:
-+ Grant permissions for control plane operations on the search service itself through Owner, Contributor, and Reader roles.
++ Allow access to control plane operations on the search service itself through Owner, Contributor, and Reader roles.
-+ Grant permissions for data plane operations, such as creating or querying indexes. This capability is currently in public preview.
++ Allow access to data plane operations, such as creating or querying indexes. This capability is currently in public preview.
-+ Grant outbound indexer access to external Azure data sources, applicable when you [configure a managed identity](search-howto-managed-identities-data-sources.md) to run the search service under. For a search service that is assigned to a managed identity, you can create roles assignments that extend external data services, such as Azure Blob Storage, to allow read access on blobs by your trusted search service.
++ Allow outbound indexer connections to be made [using a managed identity](search-howto-managed-identities-data-sources.md). For a search service that has a managed identity assigned to it, you can create role assignments that extend external data services, such as Azure Blob Storage, to allow read access on blobs by your trusted search service.
-This article focuses on roles for control plane and data plane operations. For more information about outbound indexer calls, start with [Configure a managed identity](search-howto-managed-identities-data-sources.md).
+This article focuses on the first two bullets, roles for control plane and data plane operations. For more information about outbound indexer calls, start with [Configure a managed identity](search-howto-managed-identities-data-sources.md).
-A few RBAC scenarios are **not** supported, and these include:
+A few RBAC scenarios are **not** directly supported, and these include:
+ [Custom roles](../role-based-access-control/custom-roles.md)
A few RBAC scenarios are **not** supported, and these include:
> For document-level security, a workaround is to use [security filters](search-security-trimming-for-azure-search.md) to trim results by user identity, removing documents for which the requestor should not have access. >
-## Azure roles used in Search
+## Roles used in Search
-Built-in roles include generally available and preview roles, whose assigned membership consists of Azure Active Directory users and groups under the same tenant.
+Built-in roles include generally available and preview roles, whose assigned membership consists of Azure Active Directory users and groups.
Role assignments are cumulative and pervasive across all tools and client libraries used to create or manage a search service. Clients include the Azure portal, Management REST API, Azure PowerShell, Azure CLI, and the management client library of Azure SDKs.
+There are no regional, tier, or pricing restrictions for using RBAC on Azure Cognitive Search.
+ | Role | Status | Applies to | Description | | - | -| - | -- | | [Owner](../role-based-access-control/built-in-roles.md#owner) | Stable | Control plane | Full access to the resource, including the ability to assign Azure roles. Subscription administrators are members by default. | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) | Stable | Control plane | Same level of access as Owner, minus the ability to assign roles. |
-| [Reader](../role-based-access-control/built-in-roles.md#reader) | Stable | Control plane | Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. </br></br>This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to content (indexes or synonym maps) or content metrics (storage consumed, number of objects). |
+| [Reader](../role-based-access-control/built-in-roles.md#reader) | Stable | Control plane | Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. </br></br>This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | Preview | Control plane | Provides full access to search service and object definitions, but no access to indexed data. This role is intended for service administrators who need more information than what the Reader role provides, but who should not have access to index or synonym map content.| | [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | Preview | Data plane | Provides full access to index data, but nothing else. This role is for developers or index owners who are responsible for creating and loading content, but who should not have access to search service information. The scope is all top-level resources (indexes, synonym maps, indexers, data sources, skillsets) on the search service. | | [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | Preview | Data plane | Provides read-only access to index data. This role is for users who run queries against an index. The scope is all top-level resources (indexes, synonym maps, indexers, data sources, skillsets) on the search service. |
Azure resources have the concept of [control plane and data plane](../azure-reso
| Control plane | Operations include create, update, and delete services, manage API keys, adjust partitions and replicas, and so forth. The [Management REST API](/rest/api/searchmanagement/) and equivalent client libraries define the operations applicable to the control plane. | | Data plane | Operations against the search service endpoint, encompassing all objects and data hosted on the service. Indexing, querying, and all associated actions target the data plane, which is accessed via the [Search REST API](/rest/api/searchservice/) and equivalent client libraries. </br></br>Currently, you cannot use role assignments to restrict access to individual indexes, synonym maps, indexers, data sources, or skillsets. |
-## How to assign roles
-
-Roles can be assigned using any of the [supported approaches](../role-based-access-control/role-assignments-steps.md) described in role-based access control documentation.
-
-For just the preview roles described above, you will need to also configure your search service to support authorization, and modify code to use an authorization header in requests.
-
-### [**Stable roles**](#tab/rbac-ga)
-
-Stable roles refer to those that are generally available. Currently, these roles grant access to *service* information and admin operations. None of these roles will grant access rights to the search service endpoint itself or to content and operations on the service. Authentication to the endpoint is through [API keys](search-security-api-keys.md).
-
-A role assignment for search service administration is required. If you manage a search service, you are either an Owner or a Contributor. If you created a search service, you are automatically an Owner.
-
-+ Stable roles: [Owner](../role-based-access-control/built-in-roles.md#owner), [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Reader](../role-based-access-control/built-in-roles.md#reader)
-+ Applies to: Control plane (or service administration)
-
-No service configuration is required. To assign roles, use one of the [approaches supported for Azure role assignments](../role-based-access-control/role-assignments-steps.md).
-
-### [**Preview roles**](#tab/rbac-preview)
+## Configure Search for data plane authentication
-Several new roles are now in public preview that extend Azure roles to the search endpoint, which means you can disable or limit the dependency on API keys for authentication to search content and operations.
+If you are using any of the preview data plane roles (Search Index Data Contributor or Search Index Data Reader) and Azure AD authentication, your search service must be configured to recognize an **authorization** header on data requests that provides an OAuth2 access token.
-+ Preview roles: [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor), [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor), [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader)
-+ Applies to: Control plane and data plane operations
+You can skip this step if you are using API keys only.
-There are no regional, tier, or pricing restrictions for using RBAC on Azure Cognitive Search.
-
-#### Step 1: Configure the search service
-
-A search service must be configured to support role-based authentication on the endpoint. Use [Create or Update](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to set [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). You'll use the Management REST API version 2021-04-01-Preview to enable role-based authorization for data plane operations.
+### [**Azure portal**](#tab/config-svc-portal)
-Alternatively, you can use the Azure portal to update an existing search service.
+Set the feature flag on the portal URL to work with the preview roles: Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader.
1. Open the portal with this syntax: [https://ms.portal.azure.com/?feature.enableRbac=true](https://ms.portal.azure.com/?feature.enableRbac=true).
Alternatively, you can use the Azure portal to update an existing search service
| Role-based access control | Preview | Requires membership in a role assignment to complete the task, described in the next step. It also requires an authorization header. Choosing this option limits you to clients that support the 2021-04-30-preview REST API. | | Both | Preview | Requests are valid using either an API key or an authorization token. |
-#### Step 2: Assign users or groups
+Once your search service is RBAC-enabled, the portal will require the feature flag in the URL for assigning roles and viewing content. **Content, such as indexes and indexers, will only be visible in the portal if you open it with the feature flag.** If you want to restore the default behavior at a later date, revert the API Keys selection to **API Keys**.
+
+### [**REST API**](#tab/config-svc-rest)
+
+Use the Management REST API, version 2021-04-01-Preview, to configure your service.
+
+1. Call [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
+
+1. Set [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to `aadOrApiKey`. See [this example](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchcreateorupdateserviceauthoptions) for syntax.
+
+1. Set [AadAuthFailureMode](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#aadauthfailuremode) to specify whether 401 or 403 errors are returned when authentication fails.
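The same configuration can be sketched from PowerShell with `Invoke-AzRestMethod`, which calls the Management REST API directly. This is a sketch only: it assumes placeholder names and IDs, uses the property shape shown in the API reference linked above, and sends a PATCH (Update Service) request so that only the auth options need to be supplied:

```powershell
# Enable Azure AD plus API key authentication on an existing search service (preview API)
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>?api-version=2021-04-01-Preview"
$body = @{
    properties = @{
        authOptions = @{
            aadOrApiKey = @{ aadAuthFailureMode = "http403" }
        }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Method PATCH -Path $path -Payload $body
```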
++
-Use one of the [supported approaches](../role-based-access-control/role-assignments-steps.md) for assigning Azure Active Directory users or groups to any of the preview roles: Search Service Contributor, Search Index Data Contributor, Search Index Data Reader.
+## Assign roles
-Alternatively, you can use the Azure portal:
+Roles can be assigned using any of the [supported approaches](../role-based-access-control/role-assignments-steps.md) described in Azure role-based access control documentation.
-1. Make sure the portal has been opened with this syntax: [https://ms.portal.azure.com/?feature.enableRbac=true](https://ms.portal.azure.com/?feature.enableRbac=true). You should see `feature.enableRbac=true` in the URL.
+You must be an Owner or have [Microsoft.Authorization/roleAssignments/write](/azure/templates/microsoft.authorization/roleassignments) permissions to manage role assignments.
+
+### [**Azure portal**](#tab/rbac-portal)
+
+Set the feature flag on the portal URL to work with the preview roles: Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader.
+
+1. Open the portal with this syntax: [https://ms.portal.azure.com/?feature.enableRbac=true](https://ms.portal.azure.com/?feature.enableRbac=true). You should see `feature.enableRbac=true` in the URL.
1. Navigate to your search service.
-1. Select **Access Control** in the left navigation pane.
+1. Select **Access Control (IAM)** in the left navigation pane.
1. On the right side, under **Grant access to this resource**, select **Add role assignment**.
-1. Find an applicable role (Search Service Contributor, Search Index Data Contributor, Search Index Data Reader), and then assign an Azure Active Directory user or group identity.
+1. Find an applicable role (Owner, Contributor, Reader, Search Service Contributor, Search Index Data Contributor, Search Index Data Reader), and then assign an Azure Active Directory user or group identity.
+
+### [**PowerShell**](#tab/rbac-powershell)
+
+When [using PowerShell to assign roles](../role-based-access-control/role-assignments-powershell.md), call [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name and the scope of the assignment.
-#### Step 3: Configure requests
+Before you start, make sure you load the Azure and AzureAD modules and connect to Azure:
-To test programmatically, revise your code to use a Search REST API (any supported version) and set the authorization header on requests. If you are using the Azure SDKs, check their beta releases to see if the authorization header is available. Depending on your application, additional configuration is required to register it with Azure Active Directory or to determine how to get and pass an authorization token.
+```powershell
+Import-Module -Name Az
+Import-Module -Name AzureAD
+Connect-AzAccount
+```
-If you are using the portal, you can skip application configuration. The portal is updated to support the new Search Index Data roles.
+Scoped to the service, your syntax should look similar to the following example:
-#### Step 4: Test
+```powershell
+New-AzRoleAssignment -SignInName <email> `
+ -RoleDefinitionName "Search Index Data Contributor" `
+ -Scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>"
+```
+
+Scoped to an individual index:
+
+```powershell
+New-AzRoleAssignment -SignInName <email> `
+ -RoleDefinitionName "Search Index Data Contributor" `
+ -Scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>/indexes/<index-name>"
+```
+
+Recall that you can only scope access to top-level resources, such as indexes, synonym maps, indexers, data sources, and skillsets. You can't control access to search documents (index content) with Azure roles.
++
-Send requests to the reconfigured search service to verify role-based authorization for indexing and query tasks.
+## Configure requests and test
+
+To test programmatically, revise your code to use a Search REST API (any supported version) and set the authorization header on requests. If you are using one of the Azure SDKs, check their beta releases to see if the authorization header is available.
+
+Depending on your application, additional configuration is required to register an application with Azure Active Directory or to obtain and pass authorization tokens.
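For example, the following PowerShell sketch requests a token for the Search data plane and passes it in the authorization header on a query. The service and index names are placeholders, and the sketch assumes you've already signed in with `Connect-AzAccount`:

```powershell
# Get an Azure AD access token for the Search data plane
$token = (Get-AzAccessToken -ResourceUrl "https://search.azure.com").Token

# Query an index using the bearer token instead of an API key (preview REST API version)
$uri = "https://<search-service>.search.windows.net/indexes/<index-name>/docs?api-version=2021-04-30-Preview&search=*"
Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $token" }
```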
Alternatively, you can use the Azure portal and the roles assigned to yourself to test:
-1. Open the portal with this syntax: [https://ms.portal.azure.com/?feature.enableRbac=true](https://ms.portal.azure.com/?feature.enableRbac=true). Although your service is RBAC-enabled in a previous step, the portal will require the feature flag to invoke RBAC behaviors.
+1. Open the portal with this syntax: [https://ms.portal.azure.com/?feature.enableRbac=true](https://ms.portal.azure.com/?feature.enableRbac=true).
+
+ Although your service is RBAC-enabled in a previous step, the portal will require the feature flag to invoke RBAC behaviors. **Content, such as indexes and indexers, will only be visible in the portal if you open it with the feature flag.** If you want to restore the default behavior, revert the API Keys selection to **API Keys**.
1. Navigate to your search service.
Alternatively, you can use the Azure portal and the roles assigned to yourself t
+ For Search Index Data Contributor, select **New Index** to create a new index. Saving a new index will verify write access on the service. -
+## Disable API key authentication
+
+API keys cannot be deleted, but they can be disabled on your service. If you are using Search Index Data Contributor and Search Index Data Reader roles and Azure AD authentication, you can disable API keys, causing the search service to refuse all data-related requests that pass an API key.
+
+1. Set [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions) to `aadOrApiKey`.
+
+1. Assign roles and verify they are working correctly.
+
+1. Set `disableLocalAuth` to **True**.
+
+If you revert the last step, setting `disableLocalAuth` to **False**, the search service automatically resumes accepting API keys on requests (assuming they are specified).
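As a sketch, the same preview Management REST API can be called from PowerShell to flip `disableLocalAuth` (placeholder names as in the earlier example):

```powershell
# Disable API key authentication; revert by setting disableLocalAuth to $false
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>?api-version=2021-04-01-Preview"
$body = @{ properties = @{ disableLocalAuth = $true } } | ConvertTo-Json -Depth 3

Invoke-AzRestMethod -Method PATCH -Path $path -Payload $body
```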
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-what-is-azure-search.md
Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
-Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, e-commerce app search, or knowledge mining for data science.
+Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, retail product search, or knowledge mining for data science.
When you create a search service, you'll work with the following capabilities:
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/semantic-ranking.md
Before scoring for relevance, content must be reduced to a manageable number of
Each document is now represented by a single long string.
+The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens is roughly equivalent to a string that is 128 words in length.
+ > [!NOTE]
-> The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens is roughly equivalent to a string that is 128 words in length.
->
>Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using a specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from searchFields. For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer). ## Extraction
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/whats-new.md
Learn what's new in the service. Bookmark this page to keep up to date with the
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Description | Availability | ||--||
-| [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Adds programmatic support for indexer connections using managed identities and Azure Active Directory (Azure AD) authentication. | Public preview |
-| [Role-based authorization (preview)](search-security-rbac.md) | Authenticate using Azure Active Directory and use new built-in roles to control access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview, using Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication.|
-| [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/) | Modifies [Create or Update](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) service operations to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview |
+| [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Adds REST API support for indexer connections made using [managed identities](search-howto-managed-identities-data-sources.md) and Azure Active Directory (Azure AD) authentication. | Public preview |
+| [Role-based authorization (preview)](search-security-rbac.md) | Authenticate using Azure Active Directory and new built-in roles for data plane access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview, using Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication.|
+| [Management REST API 2021-04-01-Preview](/rest/api/searchmanagement/) | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview |
## May 2021
security-center Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/alerts-reference.md
Azure Defender alerts for container hosts aren't limited to the alerts below. Ma
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | ||-|:--:||
-| **PREVIEW ΓÇô Access from a suspicious IP address**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
+| **Access from a suspicious IP address**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
| **PREVIEW - Anonymous scan of public storage containers**<br>(Storage.Blob_ContainerAnonymousScan) | A series of attempts were made to anonymously identify public containers in your storage account. This might indicate a reconnaissance attack, where the attacker scans your storage account to identify publicly accessible containers and then tries to find sensitive data inside them. <br>Applies to: Azure Blob Storage | PreAttack, Collection | Medium / High | | **PREVIEW – Phishing content hosted on a storage account**<br>(Storage.Blob_PhishingContent<br>Storage.Files_PhishingContent) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High | | **PREVIEW - Storage account identified as source for distribution of malware**<br>(Storage.Files_WidespreadeAm) | Antimalware alerts indicate that an infected file(s) is stored in an Azure file share that is mounted to multiple VMs. If attackers gain access to a VM with a mounted Azure file share, they can use it to spread malware to other VMs that mount the same share.<br>Applies to: Azure Files | Lateral Movement, Execution | High |
To learn more about Azure Defender security alerts, see the following:
- [Security alerts in Azure Security Center](security-center-alerts-overview.md) - [Manage and respond to security alerts in Azure Security Center](security-center-managing-and-responding-alerts.md)-- [Continuously export Security Center data](continuous-export.md)
+- [Continuously export Security Center data](continuous-export.md)
security-center Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/release-notes.md
Previously updated : 07/21/2021 Last updated : 07/22/2021
To learn about *planned* changes that are coming soon to Security Center, see [I
Updates in July include:
+- [Enhancements to recommendation to enable Azure Disk Encryption (ADE)](#enhancements-to-recommendation-to-enable-azure-disk-encryption-ade)
- [Continuous export of secure score and regulatory compliance data released for General Availability (GA)](#continuous-export-of-secure-score-and-regulatory-compliance-data-released-for-general-availability-ga) - [Workflow automations can be triggered by changes to regulatory compliance assessments (GA)](#workflow-automations-can-be-triggered-by-changes-to-regulatory-compliance-assessments-ga) - [Assessments API field 'FirstEvaluationDate' and 'StatusChangeDate' now available in workspace schemas and logic apps](#assessments-api-field-firstevaluationdate-and-statuschangedate-now-available-in-workspace-schemas-and-logic-apps) - ['Compliance over time' workbook template added to Azure Monitor Workbooks gallery](#compliance-over-time-workbook-template-added-to-azure-monitor-workbooks-gallery)
+### Enhancements to recommendation to enable Azure Disk Encryption (ADE)
+
+Following user feedback, we've renamed the recommendation **Disk encryption should be applied on virtual machines**.
+
+The new recommendation uses the same assessment ID and is called **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources**.
+
+The description has also been updated to better explain the purpose of this hardening recommendation:
+
+| Recommendation | Description | Severity |
+|--|--|:--:|
+| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches aren't encrypted, and data isn't encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see https://aka.ms/diskencryptioncomparison.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) you're using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
+| | | |
++ ### Continuous export of secure score and regulatory compliance data released for General Availability (GA) [Continuous export](continuous-export.md) provides the mechanism for exporting your security alerts and recommendations for tracking with other monitoring tools in your environment.
Security Center's asset inventory page has been improved in the following ways:
:::image type="content" source="media/release-notes/adding-contains-exemption-filter.gif" alt-text="Adding the filter 'contains exemption' in Azure Security Center's asset inventory page":::
-Learn more about how to [Explore and manage your resources with asset inventory](asset-inventory.md).
+Learn more about how to [Explore and manage your resources with asset inventory](asset-inventory.md).
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 07/13/2021 Last updated : 07/22/2021
If you're looking for the latest release notes, you'll find them in the [What's
| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | July 2021 | | [Deprecating recommendation 'Log Analytics agent health issues should be resolved on your machines'](#deprecating-recommendation-log-analytics-agent-health-issues-should-be-resolved-on-your-machines) | July 2021 | | [Logical reorganization of Azure Defender for Resource Manager alerts](#logical-reorganization-of-azure-defender-for-resource-manager-alerts) | August 2021 |
-| [Enhancements to recommendation to enable Azure Disk Encryption (ADE)](#enhancements-to-recommendation-to-enable-azure-disk-encryption-ade) | August 2021 |
| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q3 2021 | | [Enable Azure Defender security control to be included in secure score](#enable-azure-defender-security-control-to-be-included-in-secure-score) | Q3 2021 | | | |
These are the alerts that are currently part of Azure Defender for Resource Mana
Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md).
-### Enhancements to recommendation to enable Azure Disk Encryption (ADE)
-
-**Estimated date for change:** August 2021
-
-Following user feedback, we'll be revising the recommendation **Disk encryption should be applied on virtual machines**.
-
-The new recommendation will use the same assessment ID and will be called **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources**.
-
-The description will also be updated to better explain the purpose of this hardening recommendation:
-
-| Recommendation | Description | Severity |
-|--|--|:--:|
-| **Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources** | By default, a virtual machineΓÇÖs OS and data disks are encrypted-at-rest using platform-managed keys; temp disks and data caches arenΓÇÖt encrypted, and data isnΓÇÖt encrypted when flowing between compute and storage resources. For a comparison of different disk encryption technologies in Azure, see https://aka.ms/diskencryptioncomparison.<br>Use Azure Disk Encryption to encrypt all this data. Disregard this recommendation if: (1) youΓÇÖre using the encryption-at-host feature, or (2) server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage. | High |
-| | | |
### Enhancements to recommendation to classify sensitive data in SQL databases
sentinel Audit Sentinel Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/audit-sentinel-data.md
LAQueryLogs
```
+## Monitor Azure Sentinel with workbooks, rules, and playbooks
+
+Use Azure Sentinel's own features to monitor events and actions that occur within Azure Sentinel.
+
+- **Monitor with workbooks**. The following workbooks were built to monitor workspace activity:
+
+ - **Workspace Auditing**. Includes information about which users in the environment are performing actions, which actions they have performed, and more.
+ - **Analytics Efficiency**. Provides insight into which analytic rules are being used, which MITRE tactics are most covered, and incidents generated from the rules.
+ - **Security Operations Efficiency**. Presents metrics on SOC team performance, incidents opened, incidents closed, and more. This workbook can be used to show team performance and highlight any areas that might be lacking that require attention.
+ - **Data collection health monitoring**. Helps watch for stalled or stopped ingestions.
+
+ For more information, see [Commonly used Azure Sentinel workbooks](top-workbooks.md).
+
+- **Watch for ingestion delay**. If you have concerns about ingestion delay, [set a variable in an analytics rule](https://techcommunity.microsoft.com/t5/azure-sentinel/handling-ingestion-delay-in-azure-sentinel-scheduled-alert-rules/ba-p/2052851) to represent the delay.
+
+    For example, the following queries show how to account for the delay in an analytics rule, so that results don't include duplicates and logs aren't missed, and how to calculate the current ingestion delay:
+
+ ```kusto
+    // Scope the rule's query window, accounting for ingestion delay
+    let ingestion_delay = 2min;
+    let rule_look_back = 5min;
+    CommonSecurityLog
+    | where TimeGenerated >= ago(ingestion_delay + rule_look_back)
+    | where ingestion_time() > ago(rule_look_back)
+
+    // Separate query: calculate the ingestion delay per vendor and product
+    CommonSecurityLog
+    | extend delay = ingestion_time() - TimeGenerated
+    | summarize percentiles(delay, 95, 99) by DeviceVendor, DeviceProduct
+ ```
+
+ For more information, see [Automate incident handling in Azure Sentinel with automation rules](automate-incident-handling-with-automation-rules.md).
+
+- **Monitor data connector health** using the [Connector Health Push Notification Solution](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Send-ConnectorHealthStatus) playbook to watch for stalled or stopped ingestion, and send notifications when a connector has stopped collecting data or machines have stopped reporting.
+ ## Next steps In Azure Sentinel, use the **Workspace audit** workbook to audit the activities in your SOC environment.
-For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+For more information, see [Visualize and monitor your data](tutorial-monitor-your-data.md).
sentinel Automate Responses With Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/automate-responses-with-playbooks.md
na ms.devlang: na Previously updated : 03/17/2021 Last updated : 06/29/2021
Many, if not most, of these alerts and incidents conform to recurring patterns t
A playbook is a collection of these remediation actions that can be run from Azure Sentinel as a routine. A playbook can help [**automate and orchestrate your threat response**](tutorial-respond-threats-playbook.md); it can be run manually or set to run automatically in response to specific alerts or incidents, when triggered by an analytics rule or an automation rule, respectively.
+For example, if an account and machine are compromised, a playbook can isolate the machine from the network and block the account by the time the SOC team is notified of the incident.
+ Playbooks are created and applied at the subscription level, but the **Playbooks** tab (in the new **Automation** blade) displays all the playbooks available across any selected subscriptions. ### Azure Logic Apps basic concepts
Another way to view API connections would be to go to the **All Resources** blad
In order to change the authorization of an existing connection, enter the connection resource, and select **Edit API connection**.
+## Recommended playbooks
+
+The following recommended playbooks, and other similar playbooks are available to you in the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks):
+
+- **Notification playbooks** are triggered when an alert or incident is created and send a notification to a configured destination:
+
+ - [Post a message in a Microsoft Teams channel](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Post-Message-Teams)
+ - [Send an Outlook email notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification)
+ - [Post a message in a Slack channel](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Post-Message-Slack)
+
+- **Blocking playbooks** are triggered when an alert or incident is created, gather entity information like the account, IP address, and host, and block them from further actions:
+
+ - [Prompt to block an IP address](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-IPs-on-MDATP-Using-GraphSecurity).
+ - [Block an AAD user](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Block-AADUser)
+ - [Reset an AAD user password](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Reset-AADUserPassword/)
+ - [Prompt to isolate a machine](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Isolate-AzureVMtoNSG)
+
+- **Create, update, or close playbooks** can create, update, or close incidents in Azure Sentinel, Microsoft 365 security services, or other ticketing systems:
+
+ - [Change an incident's severity](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Change-Incident-Severity)
+ - [Create a ServiceNow incident](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Create-SNOW-record)
++ ## Next steps - [Tutorial: Use playbooks to automate threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md)
sentinel Azure Sentinel Billing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/azure-sentinel-billing.md
For data connectors that include both free and paid data types, you can select w
![Screenshot showing the Data connector page for MCAS, with the free Security Alerts selected and the paid MCASShadowITReporting not selected.](media/billing/data-types.png)
+> [!NOTE]
+> Data connectors listed as Public Preview do not generate cost. Data connectors generate cost only after they become Generally Available (GA).
+>
+ ## Estimate Azure Sentinel costs If you're not yet using Azure Sentinel, you can use the [Azure Sentinel pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-sentinel) to estimate the potential cost of using Azure Sentinel. Enter *Azure Sentinel* in the Search box and select the resulting Azure Sentinel tile. The pricing calculator helps you estimate your likely costs based on your expected data ingestion and retention. For example, you can enter the GB of daily data you expect to ingest in Azure Sentinel, and the region for your workspace. The calculator provides the aggregate monthly cost across these components: -- Log Analytics data ingestion -- Azure Sentinel data analysis -- Log Analytics data retention
+- Log Analytics data ingestion
+- Azure Sentinel data analysis
+- Log Analytics data retention
## Manage Azure Sentinel costs
Here are some other considerations for moving to a dedicated cluster for cost op
- All workspaces linked to a cluster must be in the same region. - The maximum number of workspaces linked to a cluster is 1000. - You can unlink a linked workspace from your cluster. The number of link operations on a particular workspace is limited to two in a period of 30 days.-- You can't move an existing workspace to a customer managed key (CMK) cluster. You need to create the workspace in the cluster.
+- You can't move an existing workspace to a customer-managed key (CMK) cluster. You need to create the workspace in the cluster.
- Moving a cluster to another resource group or subscription isn't currently supported. - A workspace link to a cluster fails if the workspace is linked to another cluster.
sentinel Best Practices Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/best-practices-data.md
+
+ Title: Best practices for data collection in Azure Sentinel
+description: Learn about best practices to employ when connecting data sources to Azure Sentinel.
++++++ Last updated : 07/21/2021++
+# Data collection best practices
+
+This section reviews best practices for collecting data using Azure Sentinel data connectors. For more information, see [Connect data sources](connect-data-sources.md), [Azure Sentinel partner data connectors](partner-data-connectors.md), and the [Azure Sentinel solutions catalog](sentinel-solutions-catalog.md).
+
+## Prioritize your data connectors
+
+If it's unclear to you which data connectors will best serve your environment, start by enabling all [free data connectors](azure-sentinel-billing.md#free-data-sources).
+
+The free data connectors will start showing value from Azure Sentinel as soon as possible, while you continue to plan other data connectors and budgets.
+
+For your [partner](partner-data-connectors.md) and [custom](create-custom-connector.md) data connectors, start by setting up [Syslog](connect-syslog.md) and [CEF](connect-common-event-format.md) connectors, with the highest priority first, as well as any Linux-based devices.
+
+If your data ingestion becomes too expensive too quickly, stop or filter the logs you forward by using the [Azure Monitor Agent](/azure/azure-monitor/agents/azure-monitor-agent-overview).
+
+> [!TIP]
+> Custom data connectors enable you to ingest data into Azure Sentinel from data sources not currently supported by built-in functionality, such as via agent, Logstash, or API. For more information, see [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md).
+>
+
+## Filter your logs before ingestion
+
+You may want to filter the logs collected, or even log content, before the data is ingested into Azure Sentinel. For example, you may want to filter out logs that are irrelevant or unimportant to security operations, or you may want to remove unwanted details from log messages. Filtering message content may also be helpful when trying to drive down costs when working with Syslog, CEF, or Windows-based logs that have many irrelevant details.
+
+Filter your logs using one of the following methods:
+
+- **The Azure Monitor Agent**. Supported on both Windows and Linux to ingest [Windows security events](connect-windows-security-events.md). Filter the logs collected by configuring the agent to collect only specified events.
+
+- **Logstash**. Supports filtering message content, including making changes to the log messages. For more information, see [Connect with Logstash](create-custom-connector.md#connect-with-logstash).
+
+> [!IMPORTANT]
+> Using Logstash to filter your message content will cause your logs to be ingested as custom logs, causing any [free-tier logs](azure-sentinel-billing.md#free-data-sources) to become paid-tier logs.
+>
+> Custom logs also need to be worked into [analytics rules](automate-incident-handling-with-automation-rules.md), [threat hunting](hunting.md), and [workbooks](quickstart-get-visibility.md), as they aren't automatically added. Custom logs are also not currently supported for [Machine Learning](bring-your-own-ml.md) capabilities.
+>
+
+## Alternative data ingestion requirements
+
+Standard configuration for data collection may not work well for your organization, due to various challenges. The following tables describe common challenges or requirements, and possible solutions and considerations.
+
+> [!NOTE]
+> Many solutions listed below require a custom data connector. For more information, see [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md).
+>
+
+### On-premises Windows log collection
++
+|Challenge / Requirement |Possible solutions |Considerations |
+||||
+|**Requires log filtering** | Use Logstash <br><br>Use Azure Functions <br><br> Use LogicApps <br><br> Use custom code (.NET, Python) | While filtering can lead to cost savings and ingests only the required data, some Azure Sentinel features are not supported, such as [UEBA](identify-threats-with-entity-behavior-analytics.md), [entity pages](identify-threats-with-entity-behavior-analytics.md#entity-pages), [machine learning](bring-your-own-ml.md), and [fusion](fusion.md). <br><br>When configuring log filtering, you'll need to make updates in resources such as threat hunting queries and analytics rules. |
+|**Agent cannot be installed** |Use Windows Event Forwarding, supported with the [Azure Monitor Agent](connect-windows-security-events.md#connector-options) | Using Windows Event forwarding lowers load-balancing events per second from the Windows Event Collector, from 10,000 events to 500-1000 events.|
+|**Servers do not connect to the internet** | Use the [Log Analytics gateway](/azure/azure-monitor/agents/gateway) | Configuring a proxy to your agent requires extra firewall rules to allow the Gateway to work. |
+|**Requires tagging and enrichment at ingestion** |Use Logstash to inject a ResourceID <br><br>Use an ARM template to inject the ResourceID into on-premises machines <br><br>Ingest the resource ID into separate workspaces | Log Analytics doesn't support RBAC for custom tables <br><br>Azure Sentinel doesn't support row-level RBAC <br><br>**Tip**: You may want to adopt cross workspace design and functionality for Azure Sentinel. |
+|**Requires splitting operation and security logs** | Use the [Microsoft Monitor Agent or Azure Monitor Agent](connect-windows-security-events.md) multi-home functionality | Multi-home functionality requires more deployment overhead for the agent. |
+|**Requires custom logs** | Collect files from specific folder paths <br><br>Use API ingestion <br><br>Use PowerShell <br><br>Use Logstash | You may have issues filtering your logs. <br><br>Custom methods are not supported. <br><br>Custom connectors may require developer skills. |
+| | | |
+
+### On-premises Linux log collection
+
+|Challenge / Requirement |Possible solutions |Considerations |
+||||
+|**Requires log filtering** | Use Syslog-NG <br><br>Use Rsyslog <br><br>Use FluentD configuration for the agent <br><br> Use the Azure Monitor Agent/Microsoft Monitoring Agent <br><br> Use Logstash | Some Linux distributions may not be supported by the agent. <br> <br>Using Syslog or FluentD requires developer knowledge. <br><br>For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md) and [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md). |
+|**Agent cannot be installed** | Use a Syslog forwarder, such as syslog-ng or rsyslog. | |
+|**Servers do not connect to the internet** | Use the [Log Analytics gateway](/azure/azure-monitor/agents/gateway) | Configuring a proxy to your agent requires extra firewall rules to allow the Gateway to work. |
+|**Requires tagging and enrichment at ingestion** | Use Logstash for enrichment, or custom methods, such as API or EventHubs. | You may have extra effort required for filtering. |
+|**Requires splitting operation and security logs** | Use the [Azure Monitor Agent](connect-windows-security-events.md) with the multi-homing configuration. | |
+|**Requires custom logs** | Create a custom collector using the Microsoft Monitoring (Log Analytics) agent. | |
+| | | |
+
+### Endpoint solutions
+
+If you need to collect logs from Endpoint solutions, such as EDR, other security events, Sysmon, and so on, use one of the following methods:
+
+- **MTP connector** to collect logs from Microsoft 365 Defender for Endpoint. This option incurs extra costs for the data ingestion.
+- **Windows Event Forwarding**.
+
+> [!NOTE]
+> Load balancing cuts down on the events per second that can be processed to the workspace.
+>
+
+### Office data
+
+If you need to collect Microsoft Office data, outside of the standard connector data, use one of the following solutions:
+
+|Challenge / Requirement |Possible solutions |Considerations |
+||||
+|**Collect raw data from Teams, message trace, phishing data, and so on** | Use the built-in [Office 365 connector](connect-office-365.md) functionality, and then create a custom connector for other raw data. | Mapping events to the corresponding recordID may be challenging. |
+|**Requires RBAC for splitting countries, departments, and so on** | Customize your data collection by adding tags to data and creating dedicated workspaces for each separation needed.| Custom data collection has extra ingestion costs. |
+|**Requires multiple tenants in a single workspace** | Customize your data collection using Azure LightHouse and a unified incident view.| Custom data collection has extra ingestion costs. <br><br>For more information, see [Extend Azure Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md). |
+| | | |
+
+### Cloud platform data
+
+|Challenge / Requirement |Possible solutions |Considerations |
+||||
+|**Filter logs from other platforms** | Use Logstash <br><br>Use the Azure Monitor Agent / Microsoft Monitoring (Log Analytics) agent | Custom collection has extra ingestion costs. <br><br>You may face the challenge of collecting all Windows events instead of only security events. |
+|**Agent cannot be used** | Use Windows Event Forwarding | You may need to load balance efforts across your resources. |
+|**Servers are in air-gapped network** | Use the [Log Analytics gateway](/azure/azure-monitor/agents/gateway) | Configuring a proxy to your agent requires firewall rules to allow the Gateway to work. |
+|**RBAC, tagging, and enrichment at ingestion** | Create custom collection via Logstash or the Log Analytics API. | RBAC is not supported for custom tables <br><br>Row-level RBAC is not supported for any tables. |
+| | | |
+
+## Next steps
+
+For more information, see:
+
+- [Pre-deployment activities and prerequisites for deploying Azure Sentinel](prerequisites.md)
+- [Best practices for Azure Sentinel](best-practices.md)
+- [Connect data sources](connect-data-sources.md)
sentinel Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/best-practices.md
+
+ Title: Best practices for Azure Sentinel
+description: Learn about best practices to employ when managing your Azure Sentinel workspace.
++++++ Last updated : 07/21/2021++
+# Best practices for Azure Sentinel
+
+This collection of best practices provides guidance to use when deploying, managing, and using Azure Sentinel, including links to other articles for more information.
+
+> [!IMPORTANT]
+> Before deploying Azure Sentinel, review and complete [pre-deployment activities and prerequisites](prerequisites.md).
+>
+## Regular SOC activities to perform
+
+Schedule the following Azure Sentinel activities regularly to ensure continued security best practices:
+
+### Daily tasks
+
+- **Triage and investigate incidents**. Review the Azure Sentinel **Incidents** page to check for new incidents generated by the currently configured analytics rules, and start investigating any new incidents. For more information, see [Tutorial: Investigate incidents with Azure Sentinel](tutorial-investigate-cases.md).
+
+- **Explore hunting queries and bookmarks**. Explore results for all built-in queries, and update existing hunting queries and bookmarks. Manually generate new incidents or update old incidents if applicable. For more information, see:
+
+ - [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md)
+ - [Hunt for threats with Azure Sentinel](hunting.md)
+ - [Keep track of data during hunting with Azure Sentinel](bookmarks.md)
+
+- **Analytics rules**. Review and enable new analytics rules as applicable, including both newly released rules and rules newly available from recently connected data connectors.
+
+- **Data connectors**. Review the status, date, and time of the last log received from each data connector to ensure that data is flowing. Check for new connectors, and review ingestion to ensure set limits haven't been exceeded. For more information, see [Data collection best practices](best-practices-data.md) and [Connect data sources](connect-data-sources.md). A sample query for this check is shown after this list.
+
+- **Log Analytics Agent**. Verify that servers and workstations are actively connected to the workspace, and troubleshoot and remediate any failed connections. For more information, see [Log Analytics Agent overview](/azure/azure-monitor/agents/log-analytics-agent). The heartbeat query shown after this list can help with this check.
+
+- **Playbook failures**. Verify playbook run statuses and troubleshoot any failures. For more information, see [Tutorial: Use playbooks with automation rules in Azure Sentinel](tutorial-respond-threats-playbook.md).
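
The data connector and agent checks in the list above can be partly scripted. The following queries are a minimal sketch in KQL, assuming the standard Log Analytics tables; adjust the staleness thresholds to match your own ingestion patterns.

```kusto
// List the most recent record received per table, flagging tables that have
// not received data in the last 24 hours (a possible stalled connector).
union withsource=TableName *
| summarize LastRecord = max(TimeGenerated) by TableName
| extend Stale = LastRecord < ago(24h)
| order by LastRecord asc
```

For the agent check, the **Heartbeat** table shows which machines are still reporting:

```kusto
// Machines whose Log Analytics agent has not sent a heartbeat in the last hour.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(1h)
| order by LastHeartbeat asc
```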
++
+### Weekly tasks
+
+- **Workbook updates**. Verify whether any workbooks have updates that need to be installed. For more information, see [Commonly used Azure Sentinel workbooks](top-workbooks.md).
+
+- **Azure Sentinel GitHub repository review**. Review the [Azure Sentinel GitHub](https://github.com/Azure/Azure-Sentinel) repository to explore whether there are any new or updated resources of value for your environment, such as analytics rules, workbooks, hunting queries, or playbooks.
+
+- **Azure Sentinel auditing**. Review Azure Sentinel activity to see who has updated or deleted resources, such as analytics rules, bookmarks, and so on. For more information, see [Audit Azure Sentinel queries and activities](audit-sentinel-data.md).
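
  As a sketch of the kind of query this review can use, the following KQL assumes the Azure Activity data connector is enabled and surfaces recent management operations on Azure Sentinel resources; operation and column names may differ slightly depending on your AzureActivity schema version.

  ```kusto
  // Recent management operations on Azure Sentinel (SecurityInsights) resources,
  // such as creating, updating, or deleting analytics rules.
  AzureActivity
  | where TimeGenerated > ago(7d)
  | where OperationNameValue contains "SecurityInsights"
  | project TimeGenerated, OperationNameValue, Caller, ActivityStatusValue
  | order by TimeGenerated desc
  ```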
+
+### Monthly tasks
+
+- **Review user access**. Review permissions for your users and check for inactive users. For more information, see [Permissions in Azure Sentinel](roles.md).
+
+- **Log Analytics workspace review**. Verify that the Log Analytics workspace data retention policy still aligns with your organization's policy. For more information, see [Data retention policy](/workplace-analytics/privacy/license-expiration) and [Integrate Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md).
++
+## Integrate with Microsoft security services
+
+Azure Sentinel is empowered by the components that send data to your workspace, and is made stronger through integrations with other Microsoft services. Any logs ingested into products such as Microsoft Cloud App Security, Microsoft Defender for Endpoint, and Microsoft Defender for Identity allow these services to create detections, and in turn provide those detections to Azure Sentinel. Logs can also be ingested directly into Azure Sentinel to provide a fuller picture for events and incidents.
+
+For example, the following image shows how Azure Sentinel ingests data from other Microsoft services and multi-cloud and partner platforms to provide coverage for your environment:
++
+In addition to ingesting alerts and logs from other sources, Azure Sentinel also:
+
+- **Uses the information it ingests with [machine learning](bring-your-own-ml.md)** that allows for better event correlation, alert aggregation, anomaly detection, and more.
+- **Builds and presents interactive visuals via [workbooks](quickstart-get-visibility.md)**, showing trends, related information, and key data used for both admin tasks and investigations.
+- **Runs [playbooks](tutorial-respond-threats-playbook.md) to act on alerts**, gathering information, performing actions on items, and sending notifications to various platforms.
+- **Integrates with partner platforms**, such as ServiceNow and Jira, to provide essential services for SOC teams.
+- **Ingests and fetches enrichment feeds** from [threat intelligence platforms](threat-intelligence-integration.md) to bring valuable data for investigating.
+
+## Manage and respond to incidents
+
+The following image shows recommended steps in an incident management and response process.
++
+The following sections provide high-level descriptions for how to use Azure Sentinel features for incident management and response throughout the process. For more information, see [Tutorial: Investigate incidents with Azure Sentinel](tutorial-investigate-cases.md).
+
+### Use the Incidents page and the Investigation graph
+
+Start any triage process for new incidents on the **Incidents** page in Azure Sentinel, together with the **Investigation graph**.
+
+Discover key entities, such as accounts, URLs, IP addresses, host names, activities, timelines, and more. Use this data to understand whether you have a [false positive](false-positives.md) on hand, in which case you can close the incident directly.
+
+Any generated incidents are displayed on the **Incidents** page, which serves as the central location for triage and early investigation. The **Incidents** page lists the title, severity, and related alerts, logs, and any entities of interest. Incidents also provide a quick jump into collected logs and any tools related to the incident.
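+
If you also want to review incident state in **Logs**, the following sketch queries the **SecurityIncident** table; the column names follow the standard schema, but verify them in your own workspace.

```kusto
// Latest state of incidents updated in the last seven days,
// keeping only the most recent record per incident number.
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| project TimeGenerated, IncidentNumber, Title, Severity, Status, Owner
| order by TimeGenerated desc
```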
+
+The **Incidents** page works together with the **Investigation graph**, an interactive tool that allows users to explore and dive deep into an alert to show the full scope of an attack. Users can then construct a timeline of events and discover the extent of a threat chain.
+
+If you discover that the incident is a true positive, take action directly from the **Incidents** page to investigate logs, entities, and explore the threat chain. After you've identified the threat and created a plan of action, use other tools in Azure Sentinel and [other Microsoft security services](best-practices.md#integrate-with-microsoft-security-services) to continue investigating.
++
+### Handle incidents with workbooks
+
+In addition to [visualizing and displaying information and trends](quickstart-get-visibility.md), Azure Sentinel workbooks are valuable investigative tools.
+
+For example, use the [Investigation Insights](top-workbooks.md#investigation-insights) workbook to investigate specific incidents together with any associated entities and alerts. This workbook enables you to dive deeper into entities by showing related logs, actions, and alerts.
+
+### Handle incidents with threat hunting
+
+While investigating and searching for root causes, run built-in threat hunting queries and check results for any indicators of compromise.
+
+During an investigation, or after having taken steps to remediate and eradicate the threat, use [livestream](livestream.md) to monitor, in real time, whether there are any lingering malicious events, or if malicious events are still continuing.
+
+### Handle incidents with entity behavior
+
+Entity behavior in Azure Sentinel allows users to review and investigate actions and alerts for specific entities, such as investigating into accounts and host names. For more information, see:
+
+- [Enable User and Entity Behavior Analytics (UEBA) in Azure Sentinel](enable-entity-behavior-analytics.md)
+- [Investigate incidents with UEBA data](investigate-with-ueba.md)
+- [Azure Sentinel UEBA enrichments reference](ueba-enrichments.md)
+
+### Handle incidents with watchlists and threat intelligence
+
+To maximize threat intelligence-based detections, make sure to use [threat intelligence data connectors](connect-threat-intelligence-tip.md) to ingest indicators of compromise:
+
+- Connect data sources required by the [Fusion](fusion.md) and [TI Map alerts](import-threat-intelligence.md)
+- Ingest indicators from [TAXII and TIP platforms](connect-threat-intelligence.md)
+
+Use indicators of compromise in analytics rules, when threat hunting, investigating logs, or generating more incidents.
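+
As an example of using ingested indicators in a hunt, the following sketch matches recent IP indicators from the **ThreatIntelligenceIndicator** table against firewall traffic in **CommonSecurityLog**; it assumes both tables are populated in your workspace.

```kusto
// Match active IP indicators from threat intelligence against
// firewall traffic seen in the last day.
let indicators =
    ThreatIntelligenceIndicator
    | where TimeGenerated > ago(14d)
    | where Active == true and isnotempty(NetworkIP)
    | distinct NetworkIP;
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DestinationIP in (indicators)
| project TimeGenerated, DeviceVendor, SourceIP, DestinationIP, Activity
```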
+
+Use watchlists to combine ingested data with data from external sources, such as enrichment data. For example, create lists of IP address ranges used by your organization or of recently terminated employees. Use watchlists with playbooks to gather enrichment data, such as adding malicious IP addresses to watchlists to use during detection, threat hunting, and investigations.
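+
For example, the following sketch flags sign-ins by users on a hypothetical watchlist named `TerminatedEmployees` with a `UserPrincipalName` column; both names are placeholders for your own watchlist alias and schema.

```kusto
// Flag sign-ins in the last day by users listed in the watchlist.
let terminated =
    _GetWatchlist('TerminatedEmployees')
    | project UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(1d)
| where UserPrincipalName in (terminated)
| project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress, ResultType
```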
+
+During an incident, use watchlists to contain investigation data, and then delete them when your investigation is done to ensure that sensitive data does not remain in view.
+
+## Additional best practice references
+
+The Azure Sentinel documentation has more best practice guidance scattered throughout our articles. For example, see the following articles for more information:
+
+- **Admin users**:
+
+ - [Pre-deployment activities and prerequisites for deploying Azure Sentinel](prerequisites.md)
+ - [Azure Sentinel costs and billing](azure-sentinel-billing.md)
+ - [Permissions in Azure Sentinel](roles.md)
+ - [Protecting MSSP intellectual property in Azure Sentinel](mssp-protect-intellectual-property.md)
+ - [Data collection best practices](best-practices-data.md)
+ - [Threat intelligence integration in Azure Sentinel](threat-intelligence-integration.md)
+ - [Audit Azure Sentinel queries and activities](audit-sentinel-data.md)
+
+- **Analysts**:
+
+ - [Recommended playbooks](automate-responses-with-playbooks.md#recommended-playbooks)
+ - [Handle false positives in Azure Sentinel](false-positives.md)
+ - [Hunt for threats with Azure Sentinel](hunting.md)
+ - [Commonly used Azure Sentinel workbooks](top-workbooks.md)
+ - [Detect threats out-of-the-box](tutorial-detect-threats-built-in.md)
+ - [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md)
+ - [Use Jupyter Notebook to hunt for security threats](notebooks.md)
+
+## Next steps
+
+To get started with Azure Sentinel, see:
+
+- [On-board Azure Sentinel](quickstart-onboard.md)
+- [Get visibility into alerts](quickstart-get-visibility.md)
sentinel Connect Azure Security Center https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-azure-security-center.md
As Azure Defender itself is enabled per subscription, the Azure Defender connect
1. You can select whether you want the alerts from Azure Defender to automatically generate incidents in Azure Sentinel. Under **Create incidents**, select **Enabled** to turn on the default analytics rule that automatically [creates incidents from alerts](create-incidents-from-alerts.md). You can then edit this rule under **Analytics**, in the **Active rules** tab.
+ > [!TIP]
+ > When configuring [custom analytics rules](tutorial-detect-threats-custom.md) for alerts from Azure Defender, consider the alert severity to avoid opening incidents for informational alerts.
+ >
+ > Informational alerts in Azure Security Center don't represent a security risk on their own, and are relevant only in the context of an existing, open incident. For more information, see [Security alerts and incidents in Azure Security Center](/azure/security-center/security-center-alerts-overview).
+ >
+
+ ## Find and analyze your data > [!NOTE]
sentinel Connect Cef Verify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-verify.md
# STEP 3: Validate connectivity
-Once you have deployed your log forwarder (in Step 1) and configured your security solution to send it CEF messages (in Step 2), follow these instructions to verify connectivity between your security solution and Azure Sentinel.
+Once you have deployed your log forwarder (in Step 1) and configured your security solution to send it CEF messages (in Step 2), follow these instructions to verify connectivity between your security solution and Azure Sentinel.
## Prerequisites
Use the `python --version` command to check.
## How to validate connectivity 1. From the Azure Sentinel navigation menu, open **Logs**. Run a query using the **CommonSecurityLog** schema to see if you are receiving logs from your security solution.<br>
-Be aware that it may take about 20 minutes until your logs start to appear in **Log Analytics**.
+Be aware that it may take about 20 minutes until your logs start to appear in **Log Analytics**.
-1. If you don't see any results from the query, verify that events are being generated from your security solution, or try generating some, and verify they are being forwarded to the Syslog forwarder machine you designated.
+1. If you don't see any results from the query, verify that events are being generated from your security solution, or try generating some, and verify they are being forwarded to the Syslog forwarder machine you designated.
1. Run the following script on the log forwarder (applying the Workspace ID in place of the placeholder) to check connectivity between your security solution, the log forwarder, and Azure Sentinel. This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br>
The validation script performs the following checks:
</filter> ```
-1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
+1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
```bash grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb ``` - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
-
+ ```bash # Cisco ASA parsing fix sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID] ```
-1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
+1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
```bash grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
The validation script performs the following checks:
- Configuration file: `/etc/rsyslog.d/security-config-omsagent.conf` ```bash
- if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
+ if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
``` 1. Restarts the syslog daemon and the Log Analytics agent:
The validation script performs the following checks:
</filter> ```
-1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
+1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
```bash grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb ``` - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
-
+ ```bash # Cisco ASA parsing fix sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID] ```
-1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
+1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
```bash grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
The validation script performs the following checks:
``` + ## Next steps In this document, you learned how to connect CEF appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
In this document, you learned how to connect CEF appliances to Azure Sentinel. T
- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md). - Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md). - Get started [detecting threats with Azure Sentinel](./tutorial-detect-threats-built-in.md).-- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
+- [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel Connect Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-data-sources.md
To find the support contact information for a data connector:
The **Supported by** field has a support contact link you can use to access support and maintenance for the selected data connector. ++ ## Next steps - To get started with Azure Sentinel, you need a subscription to Microsoft Azure. If you don't have a subscription, you can sign up for a [free trial](https://azure.microsoft.com/free/).
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-syslog.md
Title: Connect Syslog data to Azure Sentinel | Microsoft Docs
-description: Connect any machine or appliance that supports Syslog to Azure Sentinel by using an agent on a Linux machine between the appliance and Azure Sentinel. 
+description: Connect any machine or appliance that supports Syslog to Azure Sentinel by using an agent on a Linux machine between the appliance and Azure Sentinel.
documentationcenter: na
In this document, you learned how to connect Syslog on-premises appliances to Az
- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md). - [Use workbooks](tutorial-monitor-your-data.md) to monitor your data.
sentinel Connect Windows Security Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-windows-security-events.md
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
-The Windows Security Events connector lets you stream security events from any Windows server (physical or virtual, on-premises or in any cloud) connected to your Azure Sentinel workspace. This enables you to view Windows security events in your dashboards, to use them in creating custom alerts, and to rely on them to improve your investigations, giving you more insight into your organization's network and expanding your security operations capabilities.
+The Windows Security Events connector lets you stream security events from any Windows server (physical or virtual, on-premises or in any cloud) connected to your Azure Sentinel workspace. This enables you to view Windows security events in your dashboards, to use them in creating custom alerts, and to rely on them to improve your investigations, giving you more insight into your organization's network and expanding your security operations capabilities.
+
+## Connector options
+
+The Windows Security Events connector supports the following versions:
+
+|Connector version |Description |
+|||
+|**Security events** |Legacy version, based on the Log Analytics Agent, and sometimes known as the Microsoft Monitoring Agent (MMA) or the OMS agent. <br><br>Limited to 10,000 events per second. To ensure optimal performance, make sure to keep to 8,500 events per second or fewer. |
+|**Windows Security Events** |Newer version, currently in preview, and based on the Azure Monitor Agent (AMA). <br><br>Supports additional features, such as pre-ingestion log filtering and individual data collection rules for certain groups of machines. |
+| | |
-There are now two versions of this connector: **Security events** is the legacy version, based on the Log Analytics Agent (sometimes known as the MMA or OMS agent), and **Windows Security Events** is the new version, currently in **preview** and based on the new Azure Monitor Agent (AMA). This document presents information on both connectors. You can choose from the tabs below to see the information relevant to your chosen connector.
> [!NOTE]
-> To collect security events from any system that is not an Azure virtual machine, the system must have [**Azure Arc**](../azure-monitor/agents/azure-monitor-agent-install.md) installed and enabled *before* you enable either of these connectors.
+> The MMA for Linux does not support multi-homing, that is, sending logs to multiple workspaces. If you require multi-homing, we recommend that you use the **Windows Security Events** connector.
+
+> [!TIP]
+> If you need multiple agents, you may want to use a virtual machine scale set configured to run multiple agents for log ingestion, or use several machines. Both the Security events and Windows Security Events connectors can then be used with a load balancer to ensure that the machines are not overloaded, and to prevent data duplication.
>
-> This includes:
-> - Windows servers installed on physical machines
-> - Windows servers installed on on-premises virtual machines
-> - Windows servers installed on virtual machines in non-Azure clouds
+
+This article presents information on both versions of the connector. Select from the tabs below to view the information relevant to your selected connector.
+ # [Log Analytics Agent (Legacy)](#tab/LAA)
This document shows you how to create data collection rules.
> The Azure Monitor agent can coexist with the existing agents, so you can continue to use the legacy connector during evaluation or migration. This is particularly important while the new connector is in preview, due to the limited support for existing solutions. Be careful, though, about collecting duplicate data, which can skew query results and result in additional charges for data ingestion and retention.
+## Collect security events from non-Azure machines
+
+To collect security events from any system that is not an Azure virtual machine, the system must have [**Azure Arc**](../azure-monitor/agents/azure-monitor-agent-install.md) installed and enabled *before* you enable either of these connectors.
+
+This includes:
+- Windows servers installed on physical machines
+- Windows servers installed on on-premises virtual machines
+- Windows servers installed on virtual machines in non-Azure clouds
## Set up the Windows Security Events connector
You'll see all your data collection rules (including those created through the A
### Create data collection rules using the API
-You can also create data collection rules using the API ([see schema](/rest/api/monitor/data-collection-rules)), which can make life easier if you're creating a lot of rules (if you're an MSSP, for example). Here's an example you can use as a template for creating a rule:
+You can also create data collection rules using the API ([see schema](/rest/api/monitor/data-collection-rules)), which can make life easier if you're creating many rules (if you're an MSSP, for example). Here's an example you can use as a template for creating a rule:
**Request URL and header**
sentinel Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/customer-managed-keys.md
This article provides background information and steps to configure a [customer-
## How CMK works
-The Azure Sentinel solution uses several storage resources for log collection and features, including a Log Analytics dedicated cluster. As part of the Azure Sentinel CMK configuration, you will have to configure the CMK settings on the related Log Analytics dedicated cluster. Data saved by Azure Sentinel in storage resources other than Log Analytics will also be encrypted using the customer managed key configured for the dedicated Log Analytics cluster.
+The Azure Sentinel solution uses several storage resources for log collection and features, including a Log Analytics dedicated cluster. As part of the Azure Sentinel CMK configuration, you will have to configure the CMK settings on the related Log Analytics dedicated cluster. Data saved by Azure Sentinel in storage resources other than Log Analytics will also be encrypted using the customer-managed key configured for the dedicated Log Analytics cluster.
See the following additional relevant documentation: - [Azure Monitor customer-managed keys (CMK)](../azure-monitor/logs/customer-managed-keys.md).
sentinel Entities Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/entities-reference.md
Weak identifiers of a host entity:
| -- | - | -- | | Type | String | ΓÇÿipΓÇÖ | | Address | String | The IP address as string, e.g. 127.0.0.1 (either in IPv4 or IPv6). |
-| Location | GeoLocation | The geo-location context attached to the IP entity. |
+| Location | GeoLocation | The geo-location context attached to the IP entity. <br><br>For more information, see also [Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md). |
| Strong identifiers of an IP entity:
sentinel False Positives https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/false-positives.md
Another option for implementing exceptions is to modify the analytics rule query
To edit existing analytics rules, select **Automation** from the Azure Sentinel left navigation menu. Select the rule you want to edit, and then select **Edit** at lower right to open the **Analytics Rules Wizard**.
-For detailed instructions on using the **Analytics Rules Wizard** to create and edit analytics rules, see [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
+For detailed instructions on using the **Analytics Rules Wizard** to create and edit analytics rules, see [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
To implement an exception in a typical rule preamble, you can add a condition like `where IPAddress !in ('<ip addresses>')` near the beginning of the rule query. This line excludes specific IP addresses from the rule.
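
As a minimal sketch, using `SigninLogs` only as an example source table, such an exception looks like the following at the top of the rule query; the IP addresses are placeholders.

```kusto
// Allowlist exception placed near the beginning of an analytics rule query.
SigninLogs
| where IPAddress !in ("10.0.0.4", "10.0.0.5")
// ... the rest of the original rule logic follows here
```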
let subnets = _GetWatchlist('subnetallowlist');
For more information, see: - [Automate incident handling in Azure Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)-- [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md)
+- [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md)
- [Use Azure Sentinel watchlists](watchlists.md)
sentinel Fusion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/fusion.md
This detection is enabled by default in Azure Sentinel. To check the status, or
Since the **Fusion** rule type contains only one rule that can't be modified, rule templates are not applicable for this rule type. > [!NOTE]
-> Azure Sentinel currently uses 30 days of historical data to train the machine learning systems. This data is always encrypted using Microsoft’s keys as it passes through the machine learning pipeline. However, the training data is not encrypted using [Customer Managed Keys (CMK)](customer-managed-keys.md) if you enabled CMK in your Azure Sentinel workspace. To opt out of Fusion, navigate to **Azure Sentinel** \> **Configuration** \> **Analytics \> Active rules \> Advanced Multistage Attack Detection** and in the **Status** column, select **Disable.**
+> Azure Sentinel currently uses 30 days of historical data to train the machine learning systems. This data is always encrypted using Microsoft’s keys as it passes through the machine learning pipeline. However, the training data is not encrypted using [Customer-Managed Keys (CMK)](customer-managed-keys.md) if you enabled CMK in your Azure Sentinel workspace. To opt out of Fusion, navigate to **Azure Sentinel** \> **Configuration** \> **Analytics \> Active rules \> Advanced Multistage Attack Detection** and in the **Status** column, select **Disable.**
### Configure scheduled analytics rules for fusion detections
sentinel Geolocation Data Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/geolocation-data-api.md
+
+ Title: Enrich entities with geolocation data in Azure Sentinel using REST API | Microsoft Docs
+description: This article describes how you can enrich entities in Azure Sentinel with geolocation data via REST API.
+
+documentationcenter: na
++
+editor: ''
+++
+ms.devlang: na
++
+ na
+ Last updated : 07/21/2021+++
+# Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)
+
+This article shows you how to enrich entities in Azure Sentinel with geolocation data using the REST API.
+
+> [!IMPORTANT]
+> This feature is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Common URI parameters
+
+The following are the common URI parameters for the geolocation API:
++++
+| Name | In | Required | Type | Description |
+|-|-|-|-|-|
+| **{subscriptionId}** | path | yes | GUID | The Azure subscription ID |
+| **{resourceGroupName}** | path | yes | string | The name of the resource group within the subscription |
+| **{api-version}** | query | yes | string | The version of the protocol used to make this request. As of April 30, 2021, the geolocation API version is *2019-01-01-preview*.|
+| **{ipAddress}** | query | yes | string | The IP Address for which geolocation information is needed, in an IPv4 or IPv6 format. |
+|
+
+## Enrich IP Address with geolocation information
+
+This command retrieves geolocation data for a given IP Address.
+
+### Request URI
+
+| Method | Request URI |
+|-|-|
+| **GET** | `https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.SecurityInsights/enrichment/ip/geodata/?ipaddress={ipAddress}&api-version={api-version}` |
+|
+
+### Responses
+
+|Status code |Description |
+|||
+|**200** | Success |
+|**400** | IP address not provided or is in invalid format |
+|**404** | Geolocation data not found for this IP address |
+|**429** | Too many requests, try again in the specified timeframe |
+| | |
+
+### Fields returned in the response
+
+|Field name |Description |
+|||
+|**ASN** | The autonomous system number associated with this IP address |
+|**carrier** | The name of the carrier for this IP address |
+|**city** | The city where this IP address is located |
+|**cityCf** | A numeric rating of confidence that the value in the 'city' field is correct, on a scale of 0-100 |
+|**continent** | The continent where this IP address is located |
+|**country** | The country where this IP address is located |
+|**countryCf** | A numeric rating of confidence that the value in the 'country' field is correct on a scale of 0-100 |
+|**ipAddr** | The dotted-decimal or colon-separated string representation of the IP address |
+|**ipRoutingType** | A description of the connection type for this IP address |
+|**latitude** | The latitude of this IP address |
+|**longitude** | The longitude of this IP address |
+|**organization** | The name of the organization for this IP address |
+|**organizationType** | The type of the organization for this IP address |
+|**region** | The geographic region where this IP address is located |
+|**state** | The state where this IP address is located |
+|**stateCf** | A numeric rating of confidence that the value in the 'state' field is correct on a scale of 0-100 |
+|**stateCode** | The abbreviated name for the state where this IP address is located |
+| | |
++
+## Throttling limits for the API
+
+This API has a limit of 100 calls, per user, per hour.
+
+### Sample response
+
+```rest
+"body":
+{
+ "asn": "12345",
+ "carrier": "Microsoft",
+ "city": "Redmond",
+ "cityCf": 90,
+ "continent": "north america",
+ "country": "united states",
+ "countryCf": 99,
+ "ipAddr": "1.2.3.4",
+ "ipRoutingType": "fixed",
+ "latitude": "40.2436",
+ "longitude": "-100.8891",
+ "organization": "Microsoft",
+ "organizationType": "tech",
+ "region": "western usa",
+ "state": "washington",
+ "stateCf": null,
+ "stateCode": "wa"
+}
+```
+
+## Next steps
+
+To learn more about Azure Sentinel, see the following articles:
+
+- Learn more about entities:
+
+ - [Azure Sentinel entity types reference](entities-reference.md)
+ - [Classify and analyze data using entities in Azure Sentinel](entities-in-azure-sentinel.md)
+ - [Map data fields to entities in Azure Sentinel](map-data-fields-to-entities.md)
+
+- Explore other uses of the [Azure Sentinel API](/rest/api/securityinsights/)
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/hunting.md
> [!IMPORTANT] >
-> - The cross-resource query experience and upgrades to the **hunting dashboard** (see marked items below) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> The cross-resource query experience and upgrades to the **hunting dashboard** (see marked items below) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] As security analysts and investigators, you want to be proactive about looking for security threats, but your various systems and security appliances generate mountains of data that can be difficult to parse and filter into meaningful events. Azure Sentinel has powerful hunting search and query tools to hunt for security threats across your organization's data sources. To help security analysts look proactively for new anomalies that weren't detected by your security apps or even by your scheduled analytics rules, Azure Sentinel's built-in hunting queries guide you into asking the right questions to find issues in the data you already have on your network.
-For example, one built-in query provides data about the most uncommon processes running on your infrastructure. You wouldn't want an alert about each time they are run - they could be entirely innocent - but you might want to take a look at the query on occasion to see if there's anything unusual.
+For example, one built-in query provides data about the most uncommon processes running on your infrastructure. You wouldn't want an alert about each time they are run - they could be entirely innocent - but you might want to take a look at the query on occasion to see if there's anything unusual.
-With Azure Sentinel hunting, you can take advantage of the following capabilities:
+## Use built-in queries
-- **Built-in queries**: The main hunting page, accessible from the Azure Sentinel navigation menu, provides ready-made query examples designed to get you started and get you familiar with the tables and the query language. These built-in hunting queries are developed by Microsoft security researchers on a continuous basis, both adding new queries and fine-tuning existing queries to provide you with an entry point to look for new detections and figure out where to start hunting for the beginnings of new attacks.
+The [hunting dashboard](#use-the-hunting-dashboard-public-preview) provides ready-made query examples designed to get you started and get you familiar with the tables and the query language. Queries run on data stored in log tables, such as for process creation, DNS events, or other event types.
-- **Powerful query language with IntelliSense**: Hunting queries are built in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), a query language that gives you the power and flexibility you need to take hunting to the next level. It's the same language used by the queries in your analytics rules and elsewhere in Azure Sentinel.
+Built-in hunting queries are developed by Microsoft security researchers on a continuous basis, both adding new queries and fine-tuning existing queries to provide you with an entry point to look for new detections and figure out where to start hunting for the beginnings of new attacks.
-- **Hunting dashboard (preview)**: This upgrade of the main page lets you run all your queries, or a selected subset, in a single click. Identify where to start hunting by looking at result count, spikes, or the change in result count over a 24-hour period. You can also sort and filter by favorites, data source, MITRE ATT&CK tactic or technique, results, or results delta. View the queries that do not yet have the necessary data sources connected, and get recommendations on how to enable these queries.
+Use queries before, during, and after a compromise to take the following actions:
-- **Create your own bookmarks**: During the hunting process, you may come across query results that may look unusual or suspicious. You can "bookmark" these items - saving them and putting them aside so you can refer back to them in the future. You can use your bookmarked items to create or enrich an incident for investigation. For more information about bookmarks, see [Use bookmarks in hunting](bookmarks.md).
+- **Before an incident occurs**: Waiting on detections is not enough. Take proactive action by running any threat-hunting queries related to the data you're ingesting into your workspace at least once a week.
-- **Use notebooks to power investigation**: Notebooks give you a kind of virtual sandbox environment, complete with its own kernel. You can use notebooks to enhance your hunting and investigations with machine learning, visualization, and data analysis. You can carry out a complete investigation in a notebook, encapsulating the raw data, the code you run on it, the results, and their visualizations, and save the whole thing so that it can be shared with and reused by others in your organization.
+ Results from your proactive hunting provide early insight into events that may confirm that a compromise is in process, or will at least show weaker areas in your environment that are at risk and need attention.
-- **Query the stored data**: The data is accessible in tables for you to query. For example, you can query process creation, DNS events, and many other event types.
+- **During a compromise**: Use [livestream](livestream.md) to run a specific query constantly, presenting results as they come in. Use livestream when you need to actively monitor user events, such as if you need to verify whether a specific compromise is still taking place, to help determine a threat actor's next action, and towards the end of an investigation to confirm that the compromise is indeed over.
-- **Query data stored in Azure Data Explorer (preview)**: You can create hunting and livestream queries over data stored in Azure Data Explorer. For further information, see details of [constructing cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md) in the Azure Monitor documentation.
+- **After a compromise**: After a compromise or an incident has occurred, make sure to improve your coverage and insight to prevent similar incidents in the future.
-- **Links to community**: Leverage the power of the greater community to find additional queries and data sources.
-
-## Get started hunting
+ - Modify your existing queries or create new ones to assist with early detection, based on insights you've gained from your compromise or incident.
-In the Azure Sentinel portal, click **Hunting**.
+ - If you've discovered or created a hunting query that provides high value insights into possible attacks, create custom detection rules based on that query and surface those insights as alerts to your security incident responders.
- :::image type="content" source="media/hunting/hunting-start.png" alt-text="Azure Sentinel starts hunting" lightbox="media/hunting/hunting-start.png":::
+ View the query's results, and select **New alert rule** > **Create Azure Sentinel alert**. Use the **Analytics rule wizard** to create a new rule based on your query. For more information, see [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
-- When you open the **Hunting** page, all the hunting queries are displayed in a single table. The table lists all the queries written by Microsoft's team of security analysts as well as any additional query you created or modified. Each query provides a description of what it hunts for, and what kind of data it runs on. These templates are grouped by their various tactics - the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration. -- (Preview) To see how the queries apply to your environment, click the **Run all queries (Preview)** button, or select a subset of queries using the check boxes to the left of each row and select the **Run selected queries (Preview)** button. Executing the queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried.
+> [!TIP]
+> - Now in public preview, you can also create hunting and livestream queries over data stored in Azure Data Explorer. For more information, see details of [constructing cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md) in the Azure Monitor documentation.
+>
+> - Use community resources, such as the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries) to find additional queries and data sources.
+>
-- (Preview) Once your queries are done running, you can see which queries returned results using the **Results** filter. You can then sort to see which queries had the most or fewest results. You can also see which queries are not active in your environment by selecting *N/A* in the **Results** filter. Hover over the info icon (i) next to the *N/A* to see which data sources are required to make this query active.
+## Use the hunting dashboard (Public preview)
-- (Preview) You can identify spikes in the data by sorting or filtering on **Results delta**. This compares the results of the last 24 hours against the results of the previous 24-48 hours to make it easy to see large differences in volume.
+The hunting dashboard enables you to run all your queries, or a selected subset, in a single selection. In the Azure Sentinel portal, select **Hunting**.
-- (Preview) The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. This is an easy way to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters.
+The table shown lists all the queries written by Microsoft's team of security analysts and any additional queries you created or modified. Each query provides a description of what it hunts for, and what kind of data it runs on. These templates are grouped by their various tactics - the icons on the right categorize the type of threat, such as initial access, persistence, and exfiltration.
-- (Preview) Queries can also be mapped to MITRE ATT&CK techniques. You can filter or sort by MITRE ATT&CK techniques using the **Technique** filter. By opening a query, you will be able to click on the technique to see the MITRE ATT&CK description of the technique. -- You can save any query to your favorites. Queries saved to your favorites automatically run each time the **Hunting** page is accessed. You can create your own hunting query or clone and customize an existing hunting query template.
-
-- By clicking **Run Query** in the hunting query details page, you can run any query without leaving the hunting page. The number of matches is displayed within the table, in the **Results** column. Review the list of hunting queries and their matches.
+Use the hunting dashboard to identify where to start hunting, by looking at result count, spikes, or the change in result count over a 24-hour period. Sort and filter by favorites, data source, MITRE ATT&CK tactic or technique, results, or results delta. View queries that still need data sources connected, and get recommendations on how to enable these queries.
-- You can perform a quick review of the underlying query in the query details pane. You can see the results by clicking the **View query results** link (below the query window) or the **View Results** button (at the bottom of the pane). The query will open in the **Logs** (Log Analytics) blade, and below the query, you can review the matches for the query.
+The following table describes detailed actions available from the hunting dashboard:
-- To preserve suspicious or interesting findings from a query in Log Analytics, mark the check boxes of the rows you wish to preserve and select **Add bookmark**. This creates for each marked row a record - a bookmark - that contains the row results, the query that created the results, and entity mappings to extract users, hosts, and IP addresses. You can add your own tags (see below) and notes to each bookmark.
+|Action |Description |
+|||
+|**See how queries apply to your environment** | Select the **Run all queries (Preview)** button, or select a subset of queries using the check boxes to the left of each row and select the **Run selected queries (Preview)** button. <br><br>Running your queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried. |
+|**View the queries that returned results** | After your queries are done running, view the queries that returned results using the **Results** filter: <br>- Sort to see which queries had the most or fewest results. <br>- View the queries that are not at all active in your environment by selecting *N/A* in the **Results** filter. <br>- Hover over the info icon (**i**) next to the *N/A* to see which data sources are required to make this query active. |
+|**Identify spikes in your data** | Identify spikes in the data by sorting or filtering on **Results delta**. <br><br>This compares the results of the last 24 hours against the results of the previous 24-48 hours, highlighting any large differences in volume. |
+|**View queries mapped to MITRE ATT&CK tactics** | The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. <br><br>This enables you to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters. |
+|**View queries mapped to MITRE ATT&CK techniques** | Queries can also be mapped to MITRE ATT&CK techniques. You can filter or sort by MITRE ATT&CK techniques using the **Technique** filter. By opening a query, you will be able to select the technique to see the MITRE ATT&CK description of the technique. |
+|**Save a query to your favorites** | Queries saved to your favorites automatically run each time the **Hunting** page is accessed. You can create your own hunting query or clone and customize an existing hunting query template. |
+|**Run queries** | Select **Run Query** in the hunting query details page to run the query directly from the hunting page. The number of matches is displayed within the table, in the **Results** column. Review the list of hunting queries and their matches. |
+|**Review an underlying query** | Perform a quick review of the underlying query in the query details pane. You can see the results by clicking the **View query results** link (below the query window) or the **View Results** button (at the bottom of the pane). The query will open in the **Logs** (Log Analytics) blade, and below the query, you can review the matches for the query. |
+| | |
-- You can see all the bookmarked findings by clicking on the **Bookmarks** tab in the main **Hunting** page. You can add tags to bookmarks to classify them for filtering. For example, if you're investigating an attack campaign, you can create a tag for the campaign, apply the tag to any relevant bookmarks, and then filter all the bookmarks based on the campaign. -- You can investigate a single bookmarked finding by selecting the bookmark and then clicking **Investigate** in the details pane to open the investigation experience. You can also create an incident from one or more bookmarks, or add one or more bookmarks to an existing incident, by marking the check boxes to the left of the desired bookmarks and then selecting either **Create new incident** or **Add to existing incident** from the **Incident actions** drop-down menu near the top of the screen. You can then triage and investigate the incident like any other.
+## Create your own bookmarks
-- Having discovered or created a hunting query that provides high value insights into possible attacks, you can create custom detection rules based on that query and surface those insights as alerts to your security incident responders. View the query's results in Log Analytics (see above), then click the **New alert rule** button at the top of the pane and select **Create Azure Sentinel alert**. The **Analytics rule wizard** will open. Complete the required steps as explained in [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
+During the hunting and investigation process, you may come across query results that look unusual or suspicious. Bookmark these items to refer back to them in the future, such as when creating or enriching an incident for investigation.
-## Query language
+- In your results, mark the checkboxes for any rows you want to preserve, and select **Add bookmark**. This creates a record - a bookmark - for each marked row, containing the row results, the query that created the results, and entity mappings to extract users, hosts, and IP addresses. You can add your own tags and notes to each bookmark.
-Hunting in Azure Sentinel is based on Kusto query language. For more information on the query language and supported operators, see [Query Language Reference](../azure-monitor/logs/get-started-queries.md).
+- View all the bookmarked findings by clicking on the **Bookmarks** tab in the main **Hunting** page. Add tags to bookmarks to classify them for filtering. For example, if you're investigating an attack campaign, you can create a tag for the campaign, apply the tag to any relevant bookmarks, and then filter all the bookmarks based on the campaign.
-## Public hunting query GitHub repository
+- Investigate a single bookmarked finding by selecting the bookmark and then clicking **Investigate** in the details pane to open the investigation experience.
-Check out the [Hunting query repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries). Contribute and use example queries shared by our customers.
+ You can also create an incident from one or more bookmarks or add one or more bookmarks to an existing incident. Select a checkbox to the left of any bookmarks you want to use, and then select **Incident actions** > **Create new incident** or **Add to existing incident**. Triage and investigate the incident like any other.
- ## Sample query
-A typical query starts with a table name followed by a series of operators separated by \|.
+> [!TIP]
+> Bookmarks represent key events that are noteworthy and should be escalated to incidents if they are severe enough to warrant an investigation. Raise events such as potential root causes, indicators of compromise, and other notable events as bookmarks.
+>
-In the example above, start with the table name SecurityEvent and add piped elements as needed.
+For more information, see [Use bookmarks in hunting](bookmarks.md).
-1. Define a time filter to review only records from the previous seven days.
+## Use notebooks to power investigations
-1. Add a filter in the query to only show event ID 4688.
+Notebooks give you a kind of virtual sandbox environment, complete with its own kernel. You can use notebooks to enhance your hunting and investigations with machine learning, visualization, and data analysis. You can carry out a complete investigation in a notebook, encapsulating the raw data, the code you run on it, the results, and their visualizations, and save the whole thing so that it can be shared with and reused by others in your organization.
-1. Add a filter in the query on the CommandLine to contain only instances of cscript.exe.
-
-1. Project only the columns you're interested in exploring and limit the results to 1000 and click **Run query**.
+For more information, see [Use Jupyter Notebook to hunt for security threats](notebooks.md).
-1. Click the green triangle and run the query. You can test the query and run it to look for anomalous behavior.
## Useful operators and functions
-The query language is powerful and has many available operators, some useful ones of which are listed here:
+Hunting queries are built in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/), a powerful query language with IntelliSense support that gives you the power and flexibility you need to take hunting to the next level.
+
+It's the same language used by the queries in your analytics rules and elsewhere in Azure Sentinel. For more information, see [Query Language Reference](../azure-monitor/logs/get-started-queries.md).
-**where** - Filter a table to the subset of rows that satisfy a predicate.
+The following operators are especially helpful in Azure Sentinel hunting queries (a combined example follows the list):
-**summarize** - Produce a table that aggregates the content of the input table.
+- **where** - Filter a table to the subset of rows that satisfy a predicate.
-**join** - Merge the rows of two tables to form a new table by matching values of the specified column(s) from each table.
+- **summarize** - Produce a table that aggregates the content of the input table.
-**count** - Return the number of records in the input record set.
+- **join** - Merge the rows of two tables to form a new table by matching values of the specified column(s) from each table.
-**top** - Return the first N records sorted by the specified columns.
+- **count** - Return the number of records in the input record set.
-**limit** - Return up to the specified number of rows.
+- **top** - Return the first N records sorted by the specified columns.
-**project** - Select the columns to include, rename or drop, and insert new computed columns.
+- **limit** - Return up to the specified number of rows.
-**extend** - Create calculated columns and append them to the result set.
+- **project** - Select the columns to include, rename or drop, and insert new computed columns.
-**makeset** - Return a dynamic (JSON) array of the set of distinct values that Expr takes in the group
+- **extend** - Create calculated columns and append them to the result set.
-**find** - Find rows that match a predicate across a set of tables.
+- **makeset** - Return a dynamic (JSON) array of the set of distinct values that an expression takes in the group.
-**adx() (preview)** - This function performs cross-resource queries of Azure Data Explorer data sources from the Azure Sentinel hunting experience and Log Analytics. For further information see [Cross-resource query Azure Data Explorer by using Azure Monitor](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).
+- **find** - Find rows that match a predicate across a set of tables.
+
+- **adx() (preview)** - This function performs cross-resource queries of Azure Data Explorer data sources from the Azure Sentinel hunting experience and Log Analytics. For more information, see [Cross-resource query Azure Data Explorer by using Azure Monitor](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).
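+
+For example, the following sketch combines several of these operators into a single hunting query. It's illustrative only; the `SecurityEvent` table and the column names shown are assumed to exist in your workspace:
+
+```Kusto
+// Illustrative only: count and rank successful logons per account over the last seven days.
+SecurityEvent
+| where TimeGenerated >= ago(7d)                      // where: filter rows by a predicate
+| where EventID == 4624                               // 4624 = successful logon
+| summarize LogonCount = count() by Account, Computer // summarize: aggregate the input table
+| top 10 by LogonCount desc                           // top: first N records sorted by a column
+| project Account, Computer, LogonCount               // project: choose the columns to keep
+```
+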
## Save a query
-You can create or modify a query and save it as your own query or share it with users who are in the same tenant.
+Create or modify a query and save it as your own query or share it with users who are in the same tenant.
:::image type="content" source="./media/hunting/save-query.png" alt-text="Save query" lightbox="./media/hunting/save-query.png":::
-### Create a new hunting query
+**To create a new query**:
-1. Click **New query**.
+1. Select **New query**.
1. Fill in all the blank fields and select **Create**. :::image type="content" source="./media/hunting/new-query.png" alt-text="New query" lightbox="./media/hunting/new-query.png":::
-### Clone and modify an existing hunting query
+**To clone and modify an existing query**:
1. Select the hunting query in the table you want to modify.
You can create or modify a query and save it as your own query or share it with
:::image type="content" source="./media/hunting/custom-query.png" alt-text="Custom query" lightbox="./media/hunting/custom-query.png"::: ++
+## Sample query
+
+A typical query starts with a table name followed by a series of operators separated by \|.
+
+In the example above, start with the table name SecurityEvent and add piped elements as needed. A sketch of the resulting query follows these steps.
+
+1. Define a time filter to review only records from the previous seven days.
+
+1. Add a filter in the query to only show event ID 4688.
+
+1. Add a filter in the query on the CommandLine to contain only instances of cscript.exe.
+
+1. Project only the columns you're interested in exploring, limit the results to 1000, and select **Run query**.
+
+1. Select the green triangle and run the query. You can test the query and run it to look for anomalous behavior.
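+
+Putting those steps together, the query might look something like the following sketch. The column names are assumptions; adjust them to your data:
+
+```Kusto
+// Sketch of the query described in the steps above.
+SecurityEvent
+| where TimeGenerated >= ago(7d)                        // 1. previous seven days only
+| where EventID == 4688                                 // 2. event ID 4688 (process creation)
+| where CommandLine contains "cscript.exe"              // 3. only cscript.exe command lines
+| project TimeGenerated, Computer, Account, CommandLine // 4. columns of interest
+| limit 1000                                            // 4. cap the results at 1000
+```
+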
+ ## Next steps In this article, you learned how to run a hunting investigation with Azure Sentinel.
sentinel Identify Threats With Entity Behavior Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/identify-threats-with-entity-behavior-analytics.md
Entity pages consist of three parts:
- The right-side panel presents behavioral insights on the entity. These insights help to quickly identify anomalies and security threats. The insights are developed by Microsoft security research teams, and are based on anomaly detection models. > [!NOTE]
-> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident.
+> The **IP address entity page** (now in preview) contains **geolocation data** supplied by the **Microsoft Threat Intelligence service**. This service combines geolocation data from Microsoft solutions and third-party vendors and partners. The data is then available for analysis and investigation in the context of a security incident. For more information, see also [Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md).
### The timeline
sentinel Mssp Protect Intellectual Property https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/mssp-protect-intellectual-property.md
For more information, see:
- [Azure Sentinel Technical Playbook for MSSPs](https://cloudpartners.transform.microsoft.com/download?assetname=assets/Azure-Sentinel-Technical-Playbook-for-MSSPs.pdf&download=1) - [Manage multiple tenants in Azure Sentinel as an MSSP](multiple-tenants-service-providers.md) - [Extend Azure Sentinel across workspaces and tenants](extend-sentinel-across-workspaces-tenants.md)-- [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md)
+- [Visualize and monitor your data](tutorial-monitor-your-data.md)
- [Tutorial: Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md)
sentinel Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/notebooks.md
Last updated 06/24/2021
The foundation of Azure Sentinel is the data store; it combines high-performance querying, dynamic schema, and scales to massive data volumes. The Azure portal and all Azure Sentinel tools use a common API to access this data store. The same API is also available for external tools such as [Jupyter](https://jupyter.org/) notebooks and Python. While many common tasks can be carried out in the portal, Jupyter extends the scope of what you can do with this data. It combines full programmability with a huge collection of libraries for machine learning, visualization, and data analysis. These attributes make Jupyter a compelling tool for security investigation and hunting.
-![example notebook](./media/notebooks/sentinel-notebooks-map.png)
+For example, use notebooks to:
+
+- Perform analytics that aren't built into Azure Sentinel, such as some Python machine learning features.
+- Create data visualizations that aren't built into Azure Sentinel, such as custom timelines and process trees.
+- Integrate data sources outside of Azure Sentinel, such as an on-premises data set.
We've integrated the Jupyter experience into the Azure portal, making it easy for you to create and run notebooks to analyze your data. The *Kqlmagic* library provides the glue that lets you take queries from Azure Sentinel and run them directly inside a notebook. Queries use the [Kusto Query Language](https://kusto.azurewebsites.net/docs/kusto/query/index.html). Several notebooks, developed by some of Microsoft's security analysts, are packaged with Azure Sentinel. Some of these notebooks are built for a specific scenario and can be used as-is. Others are intended as samples to illustrate techniques and features that you can copy or adapt for use in your own notebooks. Other notebooks may also be imported from the Azure Sentinel Community GitHub. The integrated Jupyter experience uses [Azure Notebooks](https://notebooks.azure.com/) to store, share, and execute notebooks. You can also run these notebooks locally if you have a Python environment and Jupyter on your computer, or in other JupyterHub environments such as Azure Databricks.
-Notebooks have two components:
+## Notebook components
+
+Notebooks have two components:
- **The browser-based interface**, where you enter and run queries and code, and where the results of the execution are displayed. - **A *kernel*** that is responsible for parsing and executing the code itself. The Azure Sentinel notebook's kernel runs on an Azure virtual machine (VM). Several licensing options exist to leverage more powerful virtual machines if your notebooks include complex machine learning models.
-The Azure Sentinel notebooks use many popular Python libraries such as pandas, matplotlib, bokeh, and others. There are a great many other Python packages for you to choose from, covering areas such as:
+The Azure Sentinel notebooks use many popular Python libraries such as *pandas*, *matplotlib*, *bokeh*, and others. There are a great many other Python packages for you to choose from, covering areas such as:
- Visualizations and graphics - Data processing and analysis
We've also released some open-source Jupyter security tools in a package named [
The [Azure Sentinel Community GitHub repository](https://github.com/Azure/Azure-Sentinel) is the location for any future Azure Sentinel notebooks built by Microsoft or contributed from the community.
-To use the notebooks, you must first have the right permissions, depending on your user role.
## Manage access to Azure Sentinel notebooks
+To use the notebooks, you must first have the right permissions, depending on your user role.
+ As Azure Sentinel notebooks run on [Azure Machine Learning](../machine-learning/overview-what-is-azure-ml.md) (Azure ML) platform, you must have appropriate access to both Azure Sentinel workspace and an [Azure ML workspace](../machine-learning/concept-workspace.md). |Permission |Description |
sentinel Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/prerequisites.md
+
+ Title: Prerequisites for deploying Azure Sentinel
+description: Learn about pre-deployment activities and prerequisites for deploying Azure Sentinel.
++++++ Last updated : 07/21/2021++
+# Pre-deployment activities and prerequisites for deploying Azure Sentinel
+
+This article describes the pre-deployment activities, prerequisites, and architectural best practices for deploying Azure Sentinel.
+
+## Pre-deployment activities
+
+Before deploying Azure Sentinel, we recommend taking the following steps to help focus your deployment on providing maximum value, as soon as possible.
+
+1. Determine which [data sources](connect-data-sources.md) you need and the data size requirements to help you accurately project your deployment's budget and timeline.
+
+ You might determine this information during your business use case review, or by evaluating a current SIEM that you already have in place. If you already have a SIEM in place, analyze your data to understand which data sources provide the most value and should be ingested into Azure Sentinel.
+
+1. After the business use cases, data sources, and data size requirements have been identified, [start planning your budget](azure-sentinel-billing.md). Use a budget for your Azure Sentinel workspace to help ensure a smooth deployment, without stalls or unplanned costs. Your budget should cover the cost of:
+
+ - Data ingestion for both Azure Sentinel and Azure Log Analytics
+ - Playbooks that will be deployed
+ - Any [long-term retention solutions](store-logs-in-azure-data-explorer.md) you may have planned.
+
+1. Nominate an engineer or architect to lead the deployment, based on requirements and timelines. This individual should be the main point of contact on your team.
+
+## Azure tenant requirements
+
+Before deploying Azure Sentinel, make sure that your Azure tenant has the following requirements:
+
+- An [Azure Active Directory license and tenant](/azure/active-directory/develop/quickstart-create-new-tenant), or an [individual account with a valid payment method](https://azure.microsoft.com/en-us/free/), is required to access Azure and deploy resources.
+
+- After you have a tenant, you must have an [Azure subscription](/azure/cost-management-billing/manage/create-subscription) to track resource creation and billing.
+
+- After you have a subscription, you'll need the [relevant permissions](/azure/role-based-access-control/) to begin using your subscription. If you are using a new subscription, an admin or higher from the AAD tenant should be designated as the [owner/contributor](/azure/role-based-access-control/rbac-and-directory-admin-roles) for the subscription.
+
+ - To maintain the least privileged access available, assign roles at the level of the resource group.
+ - For more control over permissions and access, set up custom roles. For more information, see [Role-based access control](/azure/role-based-access-control/custom-roles).
+ - For extra separation between users and security users, you might want to use [resource-context](resource-context-rbac.md) or [table-level RBAC](https://techcommunity.microsoft.com/t5/azure-sentinel/table-level-rbac-in-azure-sentinel/ba-p/965043).
+
+ For more information about other roles and permissions supported for Azure Sentinel, see [Permissions in Azure Sentinel](roles.md).
+
+- A [Log Analytics workspace](/azure/azure-monitor/learn/quick-create-workspace) is required to house all of the data that Azure Sentinel will be ingesting and using for its detections, analytics, and other features. For more information, see [Workspace best practices](#workspace-best-practices).
+
+> [!TIP]
+> When setting up your Azure Sentinel workspace, [create a resource group](/azure/azure-resource-manager/management/manage-resource-groups-portal) that's dedicated to Azure Sentinel and the resources that Azure Sentinel uses, including the Log Analytics workspace, any playbooks, workbooks, and so on.
+>
+> A dedicated resource group allows for permissions to be assigned once, at the resource group level, with permissions automatically applied to any relevant resources. Managing access via a resource group helps to ensure that you're using Azure Sentinel efficiently without potentially issuing improper permissions. Without a resource group for Azure Sentinel, where resources are scattered among multiple resource groups, a user or service principal may find themselves unable to perform a required action or view data due to insufficient permissions.
+>
+> To implement more access control to resources by tiers, use extra resource groups to house the resources that should be accessed only by those groups. Using multiple tiers of resource groups enables you to separate access between those tiers.
+>
+
+## Workspace best practices
+
+Use the following best practice guidance when creating the Log Analytics workspace you'll use for Azure Sentinel:
+
+- **When naming your workspace**, include *Azure Sentinel* or some other indicator in the name, so that it's easily identified among your other workspaces.
+
+- **Use the same workspace for both Azure Sentinel and Azure Security Center**, so that all logs collected by Azure Security Center can also be ingested and used by Azure Sentinel. The default workspace created by Azure Security Center will not appear as an available workspace for Azure Sentinel.
+
+- **Use a dedicated workspace cluster if your projected data ingestion is around or more than 1 TB per day**. A [dedicated cluster](/azure/azure-monitor/logs/logs-dedicated-clusters) enables you to secure resources for your Azure Sentinel data, which enables better query performance for large data sets. Dedicated clusters also provide the option for more encryption and control of your organization's keys.
+
+- **Use a single workspace, unless you have a specific need for multiple tenants and workspaces**. Most Azure Sentinel features operate by using a single workspace per Azure Sentinel instance.
+
+ Keep in mind that Azure Sentinel ingests all logs housed within the workspace. Therefore, if you have both security-related and non-security logs, or logs that should not be ingested by Azure Sentinel, create an extra workspace to store the non-Azure Sentinel logs and avoid unwanted costs.
+
+ The following image shows an architecture where security and non-security logs go to separate workspaces, with Azure Sentinel ingesting only the security-related logs.
+
+ :::image type="content" source="media/best-practices/separate-workspaces-for-different-logs.png" alt-text="Separate workspaces for security-related logs and non-security logs.":::
+
+### Multiple tenants and working across workspaces
+
+If you are using Azure Sentinel across multiple tenants, such as if you're a managed security service provider (MSSP), use [Azure Lighthouse](/azure/lighthouse/how-to/onboard-customer) to help manage multiple Azure Sentinel instances in different tenants.
+
+- To reference data that's held in other Azure Sentinel workspaces, such as in [cross-workspace workbooks](extend-sentinel-across-workspaces-tenants.md#cross-workspace-workbooks), use [cross-workspace queries](extend-sentinel-across-workspaces-tenants.md).
+- To simplify incident management and investigation, [condense and list all incidents from each Azure Sentinel instance in a single location](multiple-workspace-view.md).
+
+The best time to use cross-workspace queries is when valuable information is stored in a different workspace, subscription or tenant, and can provide value to your current action. For example, the following code shows a sample cross-workspace query:
+
+```Kusto
+union Update, workspace("contosoretail-it").Update, workspace("WORKSPACE ID").Update
+| where TimeGenerated >= ago(1h)
+| where UpdateState == "Needed"
+| summarize dcount(Computer) by Classification
+```
+
+For more information, see [Protecting MSSP intellectual property in Azure Sentinel](mssp-protect-intellectual-property.md).
+
+### Working with multiple regions
+
+If you are deploying Azure Sentinel in multiple regions, consider the following best practice recommendations:
+
+- Use templates for your analytics rules, custom queries, workbooks, and other resources to make your deployments more efficient. Deploy the templates instead of manually deploying each resource in each region.
+
+- Use separate Azure Sentinel instances for each region. While Azure Sentinel can be used in multiple regions, you may have requirements to separate data by team, region, or site, or regulations and controls that make multi-region models impossible or more complex than needed.
+
+ Using separate instances and workspaces for each region helps to avoid bandwidth / egress costs for moving data across regions.
+
+For more information, see [Data residency in Azure](https://azure.microsoft.com/en-us/global-infrastructure/data-residency/).
+
+## Consider cost and data retention plans
+
+When structuring your Azure Sentinel instance, consider Azure Sentinel's cost and billing structure.
+
+For more information, see:
+
+- [Azure Sentinel costs and billing](azure-sentinel-billing.md)
+- [Azure Sentinel pricing](https://azure.microsoft.com/en-us/pricing/details/azure-sentinel/)
+- [Log Analytics pricing](https://azure.microsoft.com/en-us/pricing/details/monitor/)
+- [Logic apps (playbooks) pricing](https://azure.microsoft.com/en-us/pricing/details/logic-apps/)
+- [Integrating Azure Data Explorer for long-term log retention](store-logs-in-azure-data-explorer.md)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+>[On-board Azure Sentinel](quickstart-onboard.md)
+
+> [!div class="nextstepaction"]
+>[Get visibility into alerts](quickstart-get-visibility.md)
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/quickstart-onboard.md
To on-board Azure Sentinel, you first need to enable Azure Sentinel, and then co
After you connect your data sources, choose from a gallery of expertly created workbooks that surface insights based on your data. These workbooks can be easily customized to your needs. >[!IMPORTANT]
-> For information about the charges incurred when using Azure Sentinel, see [Azure Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel/).
+> For information about the charges incurred when using Azure Sentinel, see [Azure Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel/) and [Azure Sentinel costs and billing](azure-sentinel-billing.md).
## Global prerequisites
For example, if you select the **Azure Active Directory** data source, which let
After your data sources are connected, your data starts streaming into Azure Sentinel and is ready for you to start working with. You can view the logs in the [built-in workbooks](quickstart-get-visibility.md) and start building queries in Log Analytics to [investigate the data](tutorial-investigate-cases.md). ## Next steps
-In this document, you learned about onboarding and connecting data sources to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
-- Learn how to [get visibility into your data, and potential threats](quickstart-get-visibility.md).-- Get started [detecting threats with Azure Sentinel](tutorial-detect-threats-built-in.md).-- Stream data from [Common Event Format appliances](connect-common-event-format.md) into Azure Sentinel.+
+For more information, see:
+
+- **Alternate deployment options**:
+
+ - [Deploy Azure Sentinel via API](/rest/api/securityinsights/)
+ - [Deploy Azure Sentinel via PowerShell](https://www.powershellgallery.com/packages/Az.SecurityInsights/0.1.0)
+ - [Deploy Azure Sentinel via ARM template](https://techcommunity.microsoft.com/t5/azure-sentinel/azure-sentinel-all-in-one-accelerator/ba-p/1807933)
+
+- **Get started**:
+ - [Get started with Azure Sentinel](quickstart-get-visibility.md)
+ - [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md)
+ - [Connect your external solution using Common Event Format](connect-common-event-format.md)
+
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-log-reference.md
Previously updated : 05/12/2021 Last updated : 07/21/2021
This article is intended for advanced SAP users.
- **Log purpose**: Records the progress of an application execution so that you can reconstruct it later as needed.
- Available by using RFC with a custom service based on standard services of XBP interface.
+ Available by using RFC with a custom service based on standard services of XBP interface. This log is generated per client.
### ABAPAppLog_CL log schema
This article is intended for advanced SAP users.
- Other entities in the SAP system, such as user data, roles, addresses.
- Available by using RFC with a custom service based on standard services.
+ Available by using RFC with a custom service based on standard services. This log is generated per client.
### ABAPChangeDocsLog_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Includes the Change & Transport System (CTS) logs, including the directory objects and customizations where changes were made.
- Available by using RFC with a custom service based on standard tables and standard services.
+ Available by using RFC with a custom service based on standard tables and standard services. This log is generated with data across all clients.
> [!NOTE] > In addition to application logging, change documents, and table recording, all changes that you make to your production system using the Change & Transport System are documented in the CTS and TMS logs.
This article is intended for advanced SAP users.
- **Log purpose**: Provides logging for those tables that are critical or susceptible to audits.
- Available by using RFC with a custom service.
+ Available by using RFC with a custom service. This log is generated with data across all clients.
### ABAPTableDataLog_CL log schema
This article is intended for advanced SAP users.
- **Related SAP documentation**: [SAP Help Portal](https://help.sap.com/viewer/62b4de4187cb43668d15dac48fc00732/7.5.7/en-US/48b2a710ca1c3079e10000000a42189b.html) -- **Log purpose**: Monitors Gateway activities. Available by the SAP Control Web Service.
+- **Log purpose**: Monitors Gateway activities. Available by the SAP Control Web Service. This log is generated with data across all clients.
### GW_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Records inbound and outbound requests and compiles statistics of the HTTP requests.
- Available by the SAP Control Web Service.
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
### ICM_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Combines all background processing job logs (SM37).
- Available by using RFC with a custom service based on standard services of XBP interfaces.
+ Available by using RFC with a custom service based on standard services of XBP interfaces. This log is generated with data across all clients.
### ABAPJobLog_CL log schema
This article is intended for advanced SAP users.
- Information that provides a higher level of data, such as successful and unsuccessful sign-in attempts - Information that enables the reconstruction of a series of events, such as successful or unsuccessful transaction starts
- Available by using RFC XAL/SAL interfaces. SAL is available starting from version Basis 7.50.
+ Available by using RFC XAL/SAL interfaces. SAL is available starting from version Basis 7.50. This log is generated with data across all clients.
### ABAPAuditLog_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Serves as the main log for SAP Printing with the history of spool requests. (SP01).
- Available by using RFC with a custom service based on standard tables.
+ Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
### ABAPSpoolLog_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Serves as the main log for SAP Printing with the history of spool output requests. (SP02).
- Available by using RFC with a custom service based on standard tables.
+ Available by using RFC with a custom service based on standard tables. This log is generated with data across all clients.
### ABAPSpoolOutputLog_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Records all SAP NetWeaver Application Server (SAP NetWeaver AS) ABAP system errors, warnings, user locks because of failed sign-in attempts from known users, and process messages.
- Available by the SAP Control Web Service.
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
### SysLog_CL log schema
This article is intended for advanced SAP users.
For example, unmapped business processes may be simple release or approval procedures, or more complex business processes such as creating base material and then coordinating the associated departments.
- Available by using RFC with a custom service based on standard tables and standard services.
+ Available by using RFC with a custom service based on standard tables and standard services. This log is generated per client.
### ABAPWorkflowLog_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Combines all work process logs. (default: `dev_*`).
- Available by the SAP Control Web Service.
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
### WP_CL log schema
This article is intended for advanced SAP users.
- **Log purpose**: Records user actions, or attempted actions in the SAP HANA database. For example, enables you to log and monitor read access to sensitive data.
- Available by the Sentinel Linux Agent for Syslog.
+ Available by the Sentinel Linux Agent for Syslog. This log is generated with data across all clients.
### Syslog log schema
This article is intended for advanced SAP users.
- **Log purpose**: Combines all Java files-based logs, including the Security Audit Log, and System (cluster and server process), Performance, and Gateway logs. Also includes Developer Traces and Default Trace logs.
- Available by the SAP Control Web Service.
+ Available by the SAP Control Web Service. This log is generated with data across all clients.
### JavaFilesLogsCL log schema
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-security-content.md
For example:
:::image type="content" source="media/sap/sap-workbook.png" alt-text="SAP - System Applications and Products workbook.":::
-For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+For more information, see [Visualize and monitor your data](tutorial-monitor-your-data.md).
## Built-in analytics rules
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/threat-intelligence-integration.md
Azure Sentinel gives you a few different ways to [use threat intelligence feeds](work-with-threat-indicators.md) to enhance your security analysts' ability to detect and prioritize known threats.
-You can use one of many available integrated threat intelligence platform (TIP) products, you can connect to TAXII servers to take advantage of any STIX-compatible threat intelligence source, and you can also make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
+You can use one of many available integrated [threat intelligence platform (TIP) products](connect-threat-intelligence-tip.md), you can [connect to TAXII servers](connect-threat-intelligence-taxii.md) to take advantage of any STIX-compatible threat intelligence source, and you can also make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions.
+> [!TIP]
+> If you have multiple workspaces in the same tenant, such as for [Managed Service Providers (MSSPs)](mssp-protect-intellectual-property.md), it may be more cost effective to connect threat indicators only to the centralized workspace.
+>
+> When you have the same set of threat indicators imported into each separate workspace, you can run cross-workspace queries to aggregate threat indicators across your workspaces. Correlate them within your MSSP incident detection, investigation, and hunting experience.
+>
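+> For example, a cross-workspace query over imported threat indicators might look like the following sketch, where the workspace names are placeholders:
+>
+> ```Kusto
+> // Sketch: aggregate imported threat indicators across two workspaces (names are placeholders).
+> union workspace("workspace-1").ThreatIntelligenceIndicator, workspace("workspace-2").ThreatIntelligenceIndicator
+> | where TimeGenerated >= ago(7d)
+> | summarize IndicatorCount = count() by SourceSystem
+> ```
+>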
+ ## TAXII threat intelligence feeds To connect to TAXII threat intelligence feeds, follow the instructions to [connect Azure Sentinel to STIX/TAXII threat intelligence feeds](connect-threat-intelligence-taxii.md), together with the data supplied by each vendor linked below. You may need to contact the vendor directly to obtain the necessary data to use with the connector.
sentinel Top Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/top-workbooks.md
The following table lists the most commonly used, built-in Azure Sentinel workbooks.
-Access workbooks in Azure Sentinel under **Threat Management** > **Workbooks** on the left, and then search for the workbook you want to use. For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+Access workbooks in Azure Sentinel under **Threat Management** > **Workbooks** on the left, and then search for the workbook you want to use. For more information, see [Visualize and monitor your data](tutorial-monitor-your-data.md).
+
+> [!TIP]
+> We recommend deploying any workbooks associated with the data you're ingesting. Workbooks allow for broader monitoring and investigating based on your collected data.
+>
+> For more information, see [Connect data sources](connect-data-sources.md) and [Discover and deploy Azure Sentinel solutions](sentinel-solutions-deploy.md).
+>
|Workbook name |Description | ||| |**Analytics Efficiency** | Provides insights into the efficacy of your analytics rules to help you achieve better SOC performance. <br><br>For more information, see [The Toolkit for Data-Driven SOCs](https://techcommunity.microsoft.com/t5/azure-sentinel/the-toolkit-for-data-driven-socs/ba-p/2143152).| |**Azure Activity** | Provides extensive insight into your organization's Azure activity by analyzing and correlating all user operations and events. <br><br>For more information, see [Auditing with Azure Activity logs](audit-sentinel-data.md#auditing-with-azure-activity-logs). | |**Azure AD Audit logs** | Uses Azure Active Directory audit logs to provide insights into Azure AD scenarios. <br><br>For more information, see [Quickstart: Get started with Azure Sentinel](quickstart-get-visibility.md). |
-|**Azure AD Audit, Activity and Sign-in logs** | Provides insights into Azure Active Directory Audit, Activity, and Sign-in data with one workbook. This workbook can be used by both Security and Azure administrators. |
+|**Azure AD Audit, Activity and Sign-in logs** | Provides insights into Azure Active Directory Audit, Activity, and Sign-in data with one workbook. Shows activity such as sign-ins by location, device, failure reason, user action, and more. <br><br> This workbook can be used by both Security and Azure administrators. |
|**Azure AD Sign-in logs** | Uses the Azure AD sign-in logs to provide insights into Azure AD scenarios. | |**Cybersecurity Maturity Model Certification (CMMC)** | Provides a mechanism for viewing log queries aligned to CMMC controls across the Microsoft portfolio, including Microsoft security offerings, Office 365, Teams, Intune, Windows Virtual Desktop, and so on. <br><br>For more information, see [Cybersecurity Maturity Model Certification (CMMC) Workbook in Public Preview](https://techcommunity.microsoft.com/t5/azure-sentinel/what-s-new-cybersecurity-maturity-model-certification-cmmc/ba-p/2111184).|
-|**Data collection health monitoring** | Provides insights into your workspace's data ingestion status. View monitors and detect anomalies to help you determine your workspaces data collection health. <br><br>For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md). |
+|**Data collection health monitoring** / **Usage monitoring** | Provides insights into your workspace's data ingestion status, such as ingestion size, latency, and number of logs per source. View monitors and detect anomalies to help you determine your workspace's data collection health. <br><br>For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md). |
|**Event Analyzer** | Enables you to explore, audit, and speed up Windows Event Log analysis, including all event details and attributes, such as security, application, system, setup, directory service, DNS, and so on. | |**Exchange Online** |Provides insights into Microsoft Exchange online by tracing and analyzing all Exchange operations and user activities. | |**Identity & Access** | Provides insight into identity and access operations in Microsoft product usage, via security logs that include audit and sign-in logs. | |**Incident Overview** | Designed to help with triage and investigation by providing in-depth information about an incident, including general information, entity data, triage time, mitigation time, and comments. <br><br>For more information, see [The Toolkit for Data-Driven SOCs](https://techcommunity.microsoft.com/t5/azure-sentinel/the-toolkit-for-data-driven-socs/ba-p/2143152). |
-|**Investigation Insights** | Provides analysts with insight into incident, bookmark, and entity data. Common queries and detailed visualizations can help analysts investigate suspicious activities. |
+|<a name="investigation-insights"></a>**Investigation Insights** | Provides analysts with insight into incident, bookmark, and entity data. Common queries and detailed visualizations can help analysts investigate suspicious activities. |
|**Microsoft Cloud App Security - discovery logs** | Provides details about the cloud apps that are used in your organization, and insights from usage trends and drill-down data for specific users and applications. <br><br>For more information, see [Connect data from Microsoft Cloud App Security](connect-cloud-app-security.md).| |**MITRE ATT&CK Workbook** | Provides details about MITRE ATT&CK coverage for Azure Sentinel. |
-|**Office 365** | Provides insights into Office 365 by tracing and analyzing all operations and activities. Drill down into SharePoint, OneDrive, and Exchange data. |
+|**Office 365** | Provides insights into Office 365 by tracing and analyzing all operations and activities. Drill down into SharePoint, OneDrive, Teams, and Exchange data. |
|**Security Alerts** | Provides a Security Alerts dashboard for alerts in your Azure Sentinel environment. <br><br>For more information, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). | |**Security Operations Efficiency** | Intended for security operations center (SOC) managers to view overall efficiency metrics and measures regarding the performance of their team. <br><br>For more information, see [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md). | |**Threat Intelligence** | Provides insights into threat indicators, including type and severity of threats, threat activity over time, and correlation with other data sources, including Office 365 and firewalls. <br><br>For more information, see [Understand threat intelligence in Azure Sentinel](understand-threat-intelligence.md). |
sentinel Tutorial Detect Threats Built In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-built-in.md
editor: ''
ms.devlang: na-+ na Last updated 05/11/2021
-# Tutorial: Detect threats out-of-the-box
+# Detect threats out-of-the-box
-Once you have [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you'll want to be notified when something suspicious occurs. That's why Azure Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules. These templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates will automatically search across your environment for any activity that looks suspicious. Many of the templates can be customized to search for activities, or filter them out, according to your needs. The alerts generated by these rules will create incidents that you can assign and investigate in your environment.
+After you've [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you'll want to be notified when something suspicious occurs. That's why Azure Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules.
-This tutorial helps you detect threats with Azure Sentinel:
+Rule templates were designed by Microsoft's team of security experts and analysts based on known threats, common attack vectors, and suspicious activity escalation chains. Rules created from these templates will automatically search across your environment for any activity that looks suspicious. Many of the templates can be customized to search for activities, or filter them out, according to your needs. The alerts generated by these rules will create incidents that you can assign and investigate in your environment.
+
+This article helps you understand how to detect threats with Azure Sentinel:
> [!div class="checklist"] > * Use out-of-the-box threat detections > * Automate threat responses
-## About out-of-the-box detections
-
-To view all the out-of-the-box detections, go to **Analytics** and then **Rule templates**. This tab contains all the Azure Sentinel built-in rules.
-
- :::image type="content" source="media/tutorial-detect-built-in/view-oob-detections.png" alt-text="Use built-in detections to find threats with Azure Sentinel":::
-
-The following sections describe the types of out-of-the-box templates available:
-
-### Microsoft security
+## View built-in detections
-Microsoft security templates automatically create Azure Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic.
+To view all analytics rules and detections in Azure Sentinel, go to **Analytics** > **Rule templates**. This tab contains all the Azure Sentinel built-in rules.
-For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md).
-### Fusion
+Built-in detections include:
-Based on Fusion technology, advanced multistage attack detection in Azure Sentinel uses scalable machine learning algorithms that can correlate many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template.
+|Rule type |Description |
+|||
+|**Microsoft security** | Microsoft security templates automatically create Azure Sentinel incidents from the alerts generated in other Microsoft security solutions, in real time. You can use Microsoft security rules as a template to create new rules with similar logic. <br><br>For more information about security rules, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md). |
+|**Fusion** | Based on Fusion technology, advanced multistage attack detection in Azure Sentinel uses scalable machine learning algorithms that can correlate many low-fidelity alerts and events across multiple products into high-fidelity and actionable incidents. Fusion is enabled by default. Because the logic is hidden and therefore not customizable, you can only create one rule with this template. <br><br>The Fusion engine can also correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result. |
+|**Machine learning (ML) behavioral analytics** | ML behavioral analytics templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. <br><br>Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type.|
+|<a name="anomaly"></a>**Anomaly** | Anomaly rule templates use SOC-ML (machine learning) to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While these rule configurations can't be changed or fine-tuned, you can duplicate the rule, change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use SOC-ML anomalies to detect threats in Azure Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Azure Sentinel](work-with-anomaly-rules.md). |
+| <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](fusion.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in synch with when your SOC analysts begin their workday, and enable the rules then.|
+| | |
> [!IMPORTANT]
-> Some of the detections in the Fusion rule template are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - Some of the detections in the Fusion rule template are currently in **PREVIEW**. To see which detections are in preview, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
>
-> To see which detections are in preview, see [Advanced multistage attack detection in Azure Sentinel](fusion.md).
-
-In addition, the Fusion engine can now correlate alerts produced by [scheduled analytics rules](#scheduled) with those from other systems, producing high-fidelity incidents as a result.
-
-### Machine learning (ML) behavioral analytics
-
-These templates are based on proprietary Microsoft machine learning algorithms, so you cannot see the internal logic of how they work and when they run. Because the logic is hidden and therefore not customizable, you can only create one rule with each template of this type.
-
-> [!IMPORTANT]
-> - The machine learning behavioral analytics rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - The Anomaly rule templates are currently in **PREVIEW**.
>
-> - By creating and enabling any rules based on the ML behavior analytics templates, **you give Microsoft permission to copy ingested data outside of your Azure Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
-
-### Anomaly
-
-Anomaly rule templates use SOC-ML (machine learning) to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed, and while its configuration can't be changed or fine-tuned, you can duplicate the rule, change and fine-tune the duplicate, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode, compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. Learn more about [SOC-ML](soc-ml-anomalies.md) and [working with anomaly rules](work-with-anomaly-rules.md).
-
-> [!IMPORTANT]
-> The Anomaly rule templates are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-### Scheduled
+> - The machine learning behavioral analytics rule templates are currently in **PREVIEW**. By creating and enabling any rules based on the ML behavior analytics templates, **you give Microsoft permission to copy ingested data outside of your Azure Sentinel workspace's geography** as necessary for processing by the machine learning engines and models.
+>
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Scheduled analytics rules are based on built-in queries written by Microsoft security experts. You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules.
+## Use built-in analytics rules
-Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. See [Advanced multistage attack detection](fusion.md#configure-scheduled-analytics-rules-for-fusion-detections) for details.
+This procedure describes how to use built-in analytics rules templates.
-> [!TIP]
-> Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule.
->
-> We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules will get the new stack of incidents in time. For example, you might want to run a rule in synch with when your SOC analysts begin their workday, and enable the rules then.
->
+**To use built-in analytics rules**:
-## Use out-of-the-box detections
+1. In Azure Sentinel, on the **Analytics** > **Rule templates** page, select a template name, and then select the **Create rule** button on the details pane to create a new active rule based on that template.
-1. In order to use a built-in template, click the template name, and then click the **Create rule** button on the details pane to create a new active rule based on that template. Each template has a list of required data sources. When you open the template, the data sources are automatically checked for availability. If there is an availability issue, the **Create rule** button may be disabled, or you may see a warning to that effect.
+ Each template has a list of required data sources. When you open the template, the data sources are automatically checked for availability. If there is an availability issue, the **Create rule** button may be disabled, or you may see a warning to that effect.
:::image type="content" source="media/tutorial-detect-built-in/use-built-in-template.png" alt-text="Detection rule preview panel":::
-1. Clicking the **Create rule** button opens the rule creation wizard based on the selected template. All the details are autofilled, and with the **Scheduled** or **Microsoft security** templates, you can customize the logic and other rule settings to better suit your specific needs. You can repeat this process to create additional rules based on the built-in template. After following the steps in the rule creation wizard to the end, you will have finished creating a rule based on the template. The new rules will appear in the **Active rules** tab.
+1. Selecting **Create rule** opens the rule creation wizard based on the selected template. All the details are autofilled, and with the **Scheduled** or **Microsoft security** templates, you can customize the logic and other rule settings to better suit your specific needs. You can repeat this process to create additional rules based on the built-in template. After following the steps in the rule creation wizard to the end, you will have finished creating a rule based on the template. The new rules will appear in the **Active rules** tab.
- For more details on how to customize your rules in the rule creation wizard, see [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
+ For more details on how to customize your rules in the rule creation wizard, see [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
+> [!TIP]
+> - Make sure that you **enable all rules associated with your connected data sources** in order to ensure full security coverage for your environment. The most efficient way to enable analytics rules is directly from the data connector page, which lists any related rules. For more information, see [Connect data sources](connect-data-sources.md).
+>
+> - You can also **push rules to Azure Sentinel via [API](/rest/api/securityinsights/) and [PowerShell](https://www.powershellgallery.com/packages/Az.SecurityInsights/0.1.0)**, although doing so requires additional effort.
+>
+> When using API or PowerShell, you must first export the rules to JSON before enabling the rules. API or PowerShell may be helpful when enabling rules in multiple instances of Azure Sentinel with identical settings in each instance.
+>
## Export rules to an ARM template You can easily [export your rule to an Azure Resource Manager (ARM) template](import-export-analytics-rules.md) if you want to manage and deploy your rules as code. You can also import rules from template files in order to view and edit them in the user interface. ## Next steps
-In this tutorial, you learned how to get started detecting threats using Azure Sentinel.
+- To create custom rules, use existing rules as templates or references. Using an existing rule as a baseline gives you most of the logic up front, so that you only need to make the changes your scenario requires. For more information, see [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
-To learn how to automate your responses to threats, [Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md).
+- To learn how to automate your responses to threats, [Set up automated threat responses in Azure Sentinel](tutorial-respond-threats-playbook.md).
sentinel Tutorial Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-detect-threats-custom.md
 Title: Create custom analytics rules to detect threats with Azure Sentinel| Microsoft Docs
-description: Use this tutorial to learn how to create custom analytics rules to detect security threats with Azure Sentinel. Take advantage of event grouping, alert grouping, and alert enrichment, and understand AUTO DISABLED.
+ Title: Create custom analytics rules to detect threats with Azure Sentinel | Microsoft Docs
+description: Learn how to create custom analytics rules to detect security threats with Azure Sentinel. Take advantage of event grouping, alert grouping, and alert enrichment, and understand AUTO DISABLED.
documentationcenter: na
editor: ''
ms.devlang: na-+ na Last updated 06/17/2021
-# Tutorial: Create custom analytics rules to detect threats
+# Create custom analytics rules to detect threats
-Now that you've [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you can create custom analytics rules to help you discover threats and anomalous behaviors that are present in your environment. These rules search for specific events or sets of events across your environment, alert you when certain event thresholds or conditions are reached, generate incidents for your SOC to triage and investigate, and respond to threats with automated tracking and remediation processes.
+After [connecting your data sources](quickstart-onboard.md) to Azure Sentinel, create custom analytics rules to help discover threats and anomalous behaviors in your environment.
-This tutorial helps you create custom rules to detect threats with Azure Sentinel.
+Analytics rules search for specific events or sets of events across your environment, alert you when certain event thresholds or conditions are reached, generate incidents for your SOC to triage and investigate, and respond to threats with automated tracking and remediation processes.
+
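+For example, a scheduled rule's query might look something like the following sketch. The `SigninLogs` table, the failure filter, and the threshold are illustrative assumptions, not a built-in rule:
+
+```Kusto
+// Illustrative sketch: flag accounts with an unusually high number of failed sign-ins in the last hour.
+SigninLogs
+| where TimeGenerated >= ago(1h)
+| where ResultType != "0"                                // non-zero result types indicate failed sign-ins
+| summarize FailedAttempts = count() by UserPrincipalName, IPAddress
+| where FailedAttempts > 10                              // assumed threshold; tune to your environment
+```
+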
+> [!TIP]
+> When creating custom rules, use existing rules as templates or references. Using an existing rule as a baseline gives you most of the logic up front, so that you only need to make the changes your scenario requires.
+>
-Upon completing this tutorial, you will be able to do the following:
> [!div class="checklist"] > * Create analytics rules > * Define how events and alerts are processed
In the **Set rule logic** tab, you can either write a query directly in the **Ru
``` > [!NOTE]
- > #### Rule query best practices
+ > **Rule query best practices**:
> - The query length should be between 1 and 10,000 characters and cannot contain "`search *`" or "`union *`". You can use [user-defined functions](/azure/data-explorer/kusto/query/functions/user-defined-functions) to overcome the query length limitation. > > - Using ADX functions to create Azure Data Explorer queries inside the Log Analytics query window **is not supported**.
SOC managers should be sure to check the rule list regularly for the presence of
## Next steps
-In this tutorial, you learned how to get started detecting threats using Azure Sentinel.
+When using analytics rules to detect threats with Azure Sentinel, make sure that you enable all rules associated with your connected data sources in order to ensure full security coverage for your environment. The most efficient way to enable analytics rules is directly from the data connector page, which lists any related rules. For more information, see [Connect data sources](connect-data-sources.md).
+
+You can also push rules to Azure Sentinel via [API](/rest/api/securityinsights/) and [PowerShell](https://www.powershellgallery.com/packages/Az.SecurityInsights/0.1.0), although doing so requires additional effort. When using API or PowerShell, you must first export the rules to JSON before enabling the rules. API or PowerShell may be helpful when enabling rules in multiple instances of Azure Sentinel with identical settings in each instance.
+
+For more information, see:
sentinel Tutorial Investigate Cases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-investigate-cases.md
After choosing the appropriate classification, add some descriptive text in the
:::image type="content" source="media/tutorial-investigate-cases/closing-reasons-comment-apply.png" alt-text="{alt-text}"::: + ## Next steps In this tutorial, you learned how to get started investigating incidents using Azure Sentinel. Continue to the tutorial for [how to respond to threats using automated playbooks](tutorial-respond-threats-playbook.md). > [!div class="nextstepaction"]
sentinel Tutorial Monitor Your Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/tutorial-monitor-your-data.md
Title: Visualize your data using Azure Monitor Workbooks in Azure Sentinel | Microsoft Docs
-description: Use this tutorial to learn how to visualize your data using workbooks in Azure Sentinel.
+description: Learn how to visualize your data using workbooks in Azure Sentinel.
documentationcenter: na
editor: ''
ms.devlang: na-+ na Last updated 04/04/2021
-# Tutorial: Visualize and monitor your data
+# Visualize and monitor your data
Once you have [connected your data sources](quickstart-onboard.md) to Azure Sentinel, you can visualize and monitor the data using the Azure Sentinel adoption of Azure Monitor Workbooks, which provides versatility in creating custom dashboards. While the Workbooks are displayed differently in Azure Sentinel, it may be useful for you to see how to [create interactive reports with Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md). Azure Sentinel allows you to create custom workbooks across your data, and also comes with built-in workbook templates to allow you to quickly gain insights across your data as soon as you connect a data source.
-This tutorial helps you visualize your data in Azure Sentinel.
+This article describes how to visualize your data in Azure Sentinel.
+ > [!div class="checklist"] > * Use built-in workbooks > * Create new workbooks
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/whats-new.md
Previously updated : 07/11/2021 Last updated : 07/21/2021 # What's new in Azure Sentinel
If you're looking for items older than six months, you'll find them in the [Arch
## July 2021
+- [Enrich entities with geolocation data via API (Public preview)](#enrich-entities-with-geolocation-data-via-api-public-preview)
- [Support for ADX cross-resource queries (Public preview)](#support-for-adx-cross-resource-queries-public-preview) - [Watchlists are in general availability](#watchlists-are-in-general-availability) - [Support for data residency in more geos](#support-for-data-residency-in-more-geos) - [Bidirectional sync in Azure Defender connector (Public preview)](#bidirectional-sync-in-azure-defender-connector-public-preview)
+### Enrich entities with geolocation data via API (Public preview)
+
+Azure Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
+
+For more information, see [Enrich entities in Azure Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md) and [Classify and analyze data using entities in Azure Sentinel](entities-in-azure-sentinel.md).
++ ### Support for ADX cross-resource queries (Public preview) The hunting experience in Azure Sentinel now supports [ADX cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
We know that compliance isn't just an annual requirement, and organizations mu
- Features over 75 control cards, aligned to the TIC 3.0 security capabilities, with selectable GUI buttons for navigation. - Is designed to augment staffing through automation, artificial intelligence, machine learning, query/alerting generation, visualizations, tailored recommendations, and respective documentation references.
-For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+For more information, see [Visualize and monitor your data](tutorial-monitor-your-data.md).
## April 2021
In each workbook or workbook template, select :::image type="icon" source="media
Intervals are also restarted if you manually refresh the workbook by selecting the :::image type="icon" source="media/whats-new/manual-refresh-button.png" border="false"::: **Refresh** button.
-For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md) and the [Azure Monitor documentation](../azure-monitor/visualize/workbooks-overview.md).
+For more information, see [Visualize and monitor your data](tutorial-monitor-your-data.md) and the [Azure Monitor documentation](../azure-monitor/visualize/workbooks-overview.md).
### New detections for Azure Firewall
In your workbook, select the options menu > :::image type="icon" source="media/w
:::image type="content" source="media/whats-new/print-workbook.png" alt-text="Print your workbook or save as PDF.":::
-For more information, see [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md).
+For more information, see [Visualize and monitor your data](tutorial-monitor-your-data.md).
### Incident filters and sort preferences now saved in your session (Public preview)
Access the CMMC workbook in the Azure Sentinel **Workbooks** area. Select **Temp
For more information, see: - [Azure Sentinel Cybersecurity Maturity Model Certification (CMMC) Workbook](https://techcommunity.microsoft.com/t5/public-sector-blog/azure-sentinel-cybersecurity-maturity-model-certification-cmmc/ba-p/2110524)-- [Tutorial: Visualize and monitor your data](tutorial-monitor-your-data.md)
+- [Visualize and monitor your data](tutorial-monitor-your-data.md)
### Third-party data connectors
The Azure Sentinel Scheduled analytics rule wizard now provides the following en
- Expanded autocomplete support. - Real-time query validations. Errors in your query now show as a red block in the scroll bar, and as a red dot in the **Set rule logic** tab name. Additionally, a query with errors cannot be saved.
-For more information, see [Tutorial: Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
+For more information, see [Create custom analytics rules to detect threats](tutorial-detect-threats-custom.md).
### Az.SecurityInsights PowerShell module (Public preview) Azure Sentinel now supports the new [Az.SecurityInsights](https://www.powershellgallery.com/packages/Az.SecurityInsights/) PowerShell module.
site-recovery Hyper V Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/hyper-v-azure-support-matrix.md
This article summarizes the supported components and settings for disaster recov
Hyper-V with Virtual Machine Manager <br> <br>| You can perform disaster recovery to Azure for VMs running on Hyper-V hosts that are managed in the System Center Virtual Machine Manager fabric.<br/><br/> You can deploy this scenario in the Azure portal or by using PowerShell.<br/><br/> When Hyper-V hosts are managed by Virtual Machine Manager, you also can perform disaster recovery to a secondary on-premises site. To learn more about this scenario, read [this tutorial](hyper-v-vmm-disaster-recovery.md). Hyper-V without Virtual Machine Manager | You can perform disaster recovery to Azure for VMs running on Hyper-V hosts that aren't managed by Virtual Machine Manager.<br/><br/> You can deploy this scenario in the Azure portal or by using PowerShell.
+> [!NOTE]
+> Configuring both Azure Backup and Azure Site Recovery on the same Hyper-V host can cause issues with replication and is not supported.
+ ## On-premises servers **Server** | **Requirements** | **Details**
storage Customer Managed Keys Configure Key Vault Hsm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-configure-key-vault-hsm.md
Azure Storage encrypts all data in a storage account at rest. By default, data i
This article shows how to configure encryption with customer-managed keys stored in a managed HSM by using Azure CLI. To learn how to configure encryption with customer-managed keys stored in a key vault, see [Configure encryption with customer-managed keys stored in Azure Key Vault](customer-managed-keys-configure-key-vault.md).
-> [!IMPORTANT]
->
+> [!NOTE]
> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration. ## Assign an identity to the storage account
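The article walks through this step with Azure CLI; as a hedged PowerShell alternative (resource names are placeholders), assigning a system-assigned managed identity to the storage account could look like the following sketch.

```powershell
# Sketch only: give the storage account a system-assigned managed identity so it
# can later be granted access to keys in the managed HSM. Names are placeholders.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -AssignIdentity

# The principal ID is what you grant key permissions to on the managed HSM.
(Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Identity.PrincipalId
```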
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/customer-managed-keys-overview.md
You must use one of the following Azure key stores to store your customer-manage
You can either create your own keys and store them in the key vault or managed HSM, or you can use the Azure Key Vault APIs to generate keys. The storage account and the key vault or managed HSM must be in the same region and in the same Azure Active Directory (Azure AD) tenant, but they can be in different subscriptions.
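For illustration, a minimal PowerShell sketch of generating a key in an existing key vault (the vault and key names are placeholders) might look like this:

```powershell
# Sketch only: create a software-protected RSA key in an existing key vault to use
# as the customer-managed key. Vault and key names are placeholders.
Add-AzKeyVaultKey -VaultName "myKeyVault" -Name "storage-cmk" -Destination "Software"
```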
-> [!IMPORTANT]
->
+> [!NOTE]
> Azure Key Vault and Azure Key Vault Managed HSM support the same APIs and management interfaces for configuration. ## About customer-managed keys
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/scalability-targets-standard-account.md
Previously updated : 07/21/2021 Last updated : 07/22/2021
## See also - [Scalability targets for the Azure Storage resource provider](../common/scalability-targets-resource-provider.md)-- [Azure subscription limits and quotas](../../azure-resource-manager/management/azure-subscription-service-limits.md)
+- [Azure subscription limits and quotas](../../azure-resource-manager/management/azure-subscription-service-limits.md)
storage Storage Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-analytics.md
The following actions performed by Storage Analytics are billable:
* Requests to create blobs for logging. * Requests to create table entities for metrics.
-If you have configured a data retention policy, you are not charged for delete transactions when Storage Analytics deletes old logging and metrics data. However, delete transactions from a client are billable. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
+If you have configured a data retention policy, you can reduce spending by having old logging and metrics data deleted automatically. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
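As a hedged example of configuring such a retention policy with PowerShell (the account names are placeholders), the classic Storage Analytics properties can be set per service:

```powershell
# Sketch only: keep Storage Analytics logs and hourly metrics for 7 days so older
# data is removed automatically. Account names are placeholders.
$ctx = (Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").Context

Set-AzStorageServiceLoggingProperty -ServiceType Blob -LoggingOperations All `
    -RetentionDays 7 -PassThru -Context $ctx

Set-AzStorageServiceMetricsProperty -ServiceType Blob -MetricsType Hour `
    -MetricsLevel ServiceAndApi -RetentionDays 7 -PassThru -Context $ctx
```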
### Understanding billable requests
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/common/storage-redundancy.md
Previously updated : 06/10/2021 Last updated : 07/21/2021
When deciding which redundancy option is best for your scenario, consider the tr
- Whether your application requires read access to the replicated data in the secondary region if the primary region becomes unavailable for any reason > [!NOTE]
-> The features and regional availability described in this article are also available to accounts that have a hierarchical namespace.
+> The features and regional availability described in this article are also available to accounts that have a hierarchical namespace.
## Redundancy in the primary region
The following table indicates whether your data is durable and available in a gi
The following table shows which redundancy options are supported by each Azure Storage service.
-| LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS |
-|:-|:-|:-|:-|
-| Blob storage<br />Queue storage<br />Table storage<br />Azure Files<br />Azure managed disks | Blob storage<br />Queue storage<br />Table storage<br />Azure Files | Blob storage<br />Queue storage<br />Table storage<br />Azure Files<br /> | Blob storage<br />Queue storage<br />Table storage<br />Azure Files<br /> |
+| LRS | ZRS | GRS | RA-GRS | GZRS | RA-GZRS |
+|:-|:-|:-|:-|:-|:-|
+| Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1,</sup><sup>2</sup> <br />Azure managed disks | Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1,</sup><sup>2</sup> | Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1</sup> | Blob storage <br />Queue storage <br />Table storage <br /> | Blob storage <br />Queue storage <br />Table storage <br />Azure Files<sup>1</sup> | Blob storage <br />Queue storage <br />Table storage <br /> |
+
+<sup>1</sup> Standard file shares are supported on LRS, ZRS, GRS, and GZRS.<br />
+<sup>2</sup> Premium file shares are supported on LRS and ZRS.<br />
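The redundancy option is selected through the storage account SKU. As an illustrative sketch (names and regions are placeholders), creating a GZRS account and converting an existing account from LRS to GRS could look like this:

```powershell
# Sketch only: create a general-purpose v2 account with geo-zone-redundant storage.
New-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mygzrsaccount" `
    -Location "westus2" `
    -SkuName "Standard_GZRS" `
    -Kind "StorageV2"

# Convert an existing account, for example from LRS to GRS.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mylrsaccount" `
    -SkuName "Standard_GRS"
```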
### Supported storage account types
storage Storage Files Enable Smb Multichannel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-enable-smb-multichannel.md
Azure CLI does not yet support configuring SMB Multichannel. See the portal inst
- [Remount your file share](storage-how-to-use-files-windows.md) to take advantage of SMB Multichannel. - [Troubleshoot any issues you have related to SMB Multichannel](storage-troubleshooting-files-performance.md#smb-multichannel-option-not-visible-under-file-share-settings). - To learn more about the improvements, see [SMB Multichannel performance](storage-files-smb-multichannel-performance.md)
+- To learn more about the Windows SMB Multichannel feature, see [Manage SMB Multichannel](/azure-stack/hci/manage/manage-smb-multichannel).
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
Previously updated : 06/23/2021 Last updated : 07/22/2021
Keep in mind that you can enable Azure AD DS authentication over SMB only after
To enable Azure AD DS authentication over SMB with the [Azure portal](https://portal.azure.com), follow these steps: 1. In the Azure portal, go to your existing storage account, or [create a storage account](../common/storage-account-create.md).
-1. In the **Settings** section, select **Configuration**.
-1. Under **Identity-based access for file shares** switch the toggle for **Azure Active Directory Domain Service (AAD DS)** to **Enabled**.
-1. Select **Save**.
+1. In the **File shares** section, select **Active directory: Not Configured**.
+
+ :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png" alt-text="Screenshot of the File shares pane in your storage account, Active directory is highlighted." lightbox="media/storage-files-active-directory-enable/files-azure-ad-enable-storage-account-identity.png":::
-The following image shows how to enable Azure AD DS authentication over SMB for your storage account.
+1. Select **Azure Active Directory Domain Services**, then switch the toggle to **Enabled**.
+1. Select **Save**.
+ :::image type="content" source="media/storage-files-active-directory-enable/files-azure-ad-highlight.png" alt-text="Screenshot of the Active Directory pane, Azure Active Directory Domain Services is enabled." lightbox="media/storage-files-active-directory-enable/files-azure-ad-highlight.png":::
# [PowerShell](#tab/azure-powershell)
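The PowerShell tab of the source article covers the same step; as a hedged sketch with placeholder names, it is essentially a single property change on the storage account:

```powershell
# Sketch only: enable Azure AD DS authentication over SMB for Azure Files.
# Resource names are placeholders.
Set-AzStorageAccount -ResourceGroupName "myResourceGroup" `
    -Name "mystorageaccount" `
    -EnableAzureActiveDirectoryDomainServicesForFile $true

# Confirm which directory service is now configured for Azure Files.
(Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount").AzureFilesIdentityBasedAuth
```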
virtual-desktop Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/remote-app-streaming/security.md
When you host or stream apps on Azure Virtual Desktop, you reach a wide variety
## Shared responsibility
-Before Azure Virtual Desktop, on-premises virtualization solutions like Azure Virtual Desktop require granting users access to roles like Gateway, Broker, Web Access, and so on. These roles had to be fully redundant and able to handle peak capacity. Admins would install these roles as part of the Windows Server OS, and they had to be domain-joined with specific ports accessible to public connections. To keep deployments secure, admins had to constantly make sure everything in the infrastructure was maintained and up-to-date.
+Before Azure Virtual Desktop, on-premises virtualization solutions like Remote Desktop Services require granting users access to roles like Gateway, Broker, Web Access, and so on. These roles had to be fully redundant and able to handle peak capacity. Admins would install these roles as part of the Windows Server OS, and they had to be domain-joined with specific ports accessible to public connections. To keep deployments secure, admins had to constantly make sure everything in the infrastructure was maintained and up-to-date.
Meanwhile, Azure Virtual Desktop manages portions of the services on the customer's behalf. Specifically, Microsoft hosts and manages the infrastructure parts as part of the service. Partners and customers no longer have to manually manage the required infrastructure to let users access session host virtual machines (VMs). The service also has built-in advanced security capabilities like reverse connect, which reduces the risk involved with allowing users to access their remote desktops from anywhere.
virtual-machines Dav4 Dasv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dav4-dasv4-series.md
Title: Dav4 and Dasv4-series description: Specifications for the Dav4 and Dasv4-series VMs.-++ Last updated 02/03/2020-+ # Dav4 and Dasv4-series
Dav4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / Read MBps / Write MBps | Max NICs | Expected network bandwidth (Mbps) | |--|--|--|--|--|--|--|--|
-| Standard_D2a_v4 | 2 | 8 | 50 | 4 | 3000 / 46 / 23 | 2 | 800 |
-| Standard_D4a_v4 | 4 | 16 | 100 | 8 | 6000 / 93 / 46 | 2 | 1600 |
-| Standard_D8a_v4 | 8 | 32 | 200 | 16 | 12000 / 187 / 93 | 4 | 3200 |
-| Standard_D16a_v4| 16 | 64 | 400 |32 | 24000 / 375 / 187 |8 | 6400 |
-| Standard_D32a_v4| 32 | 128| 800 | 32 | 48000 / 750 / 375 |8 | 12800 |
-| Standard_D48a_v4| 48 | 192| 1200 | 32 | 96000 / 1000 / 500 | 8 | 19200 |
-| Standard_D64a_v4| 64 | 256 | 1600 | 32 | 96000 / 1000 / 500 | 8 | 25600 |
-| Standard_D96a_v4| 96 | 384 | 2400 | 32 | 96000 / 1000 / 500 | 8 | 32000 |
+| Standard_D2a_v4 | 2 | 8 | 50 | 4 | 3000 / 46 / 23 | 2 | 2000 |
+| Standard_D4a_v4 | 4 | 16 | 100 | 8 | 6000 / 93 / 46 | 2 | 4000 |
+| Standard_D8a_v4 | 8 | 32 | 200 | 16 | 12000 / 187 / 93 | 4 | 8000 |
+| Standard_D16a_v4| 16 | 64 | 400 |32 | 24000 / 375 / 187 |8 | 10000 |
+| Standard_D32a_v4| 32 | 128| 800 | 32 | 48000 / 750 / 375 |8 | 16000 |
+| Standard_D48a_v4| 48 | 192| 1200 | 32 | 96000 / 1000 / 500 | 8 | 24000 |
+| Standard_D64a_v4| 64 | 256 | 1600 | 32 | 96000 / 1000 / 500 | 8 | 32000 |
+| Standard_D96a_v4| 96 | 384 | 2400 | 32 | 96000 / 1000 / 500 | 8 | 40000 |
## Dasv4-series
Dasv4-series sizes are based on the 2.35Ghz AMD EPYC<sup>TM</sup> 7452 processor
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max cached and temp storage throughput: IOPS / MBps (cache size in GiB) | Max uncached disk throughput: IOPS / MBps | Max burst uncached disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Expected network bandwidth (Mbps) | |--|--|--|--|--|--|--|--|--|--|
-| Standard_D2as_v4|2|8|16|4|4000 / 32 (50)|3200 / 48| 4000/200 | 2 | 800 |
-| Standard_D4as_v4|4|16|32|8|8000 / 64 (100)|6400 / 96| 8000/200 |2 | 1600 |
-| Standard_D8as_v4|8|32|64|16|16000 / 128 (200)|12800 / 192| 16000/400 |4 | 3200 |
-| Standard_D16as_v4|16|64|128|32|32000 / 255 (400)|25600 / 384| 32000/800 |8 | 6400 |
-| Standard_D32as_v4|32|128|256|32|64000 / 510 (800)|51200 / 768| 64000/1600 |8 | 12800 |
-| Standard_D48as_v4|48|192|384|32|96000 / 1020 (1200)|76800 / 1148| 80000/2000 |8 | 19200 |
-| Standard_D64as_v4|64|256|512|32|128000 / 1020 (1600)|80000 / 1200| 80000/2000 |8 | 25600 |
-| Standard_D96as_v4|96|384|768|32|192000 / 1020 (2400)|80000 / 1200| 80000/2000 |8 | 32000 |
+| Standard_D2as_v4|2|8|16|4|4000 / 32 (50)|3200 / 48| 4000/200 | 2 | 2000 |
+| Standard_D4as_v4|4|16|32|8|8000 / 64 (100)|6400 / 96| 8000/200 |2 | 4000 |
+| Standard_D8as_v4|8|32|64|16|16000 / 128 (200)|12800 / 192| 16000/400 |4 | 8000 |
+| Standard_D16as_v4|16|64|128|32|32000 / 255 (400)|25600 / 384| 32000/800 |8 | 10000 |
+| Standard_D32as_v4|32|128|256|32|64000 / 510 (800)|51200 / 768| 64000/1600 |8 | 16000 |
+| Standard_D48as_v4|48|192|384|32|96000 / 1020 (1200)|76800 / 1148| 80000/2000 |8 | 24000 |
+| Standard_D64as_v4|64|256|512|32|128000 / 1020 (1600)|80000 / 1200| 80000/2000 |8 | 32000 |
+| Standard_D96as_v4|96|384|768|32|192000 / 1020 (2400)|80000 / 1200| 80000/2000 |8 | 40000 |
<sup>1</sup> Dasv4-series VMs can [burst](./disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
virtual-machines Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/dedicated-hosts.md
There are two types of quota that are consumed when you deploy a dedicated host.
To request a quota increase, create a support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-Provisioning a dedicated host will consume both dedicated host vCPU and the VM family vCPU quota, but it will not consume the regional vCPU.
+Provisioning a dedicated host will consume both dedicated host vCPU and the VM family vCPU quota, but it will not consume the regional vCPU quota. VMs placed on a dedicated host will not count against VM family vCPU quota. Should a VM be moved off a dedicated host into a multi-tenant environment, the VM will consume VM family vCPU quota.
![Screenshot of the usage and quotas page in the portal](./media/virtual-machines-common-dedicated-hosts/quotas.png)
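As a hedged sketch (the location and VM family name are placeholders), the quotas that dedicated hosts and VM families draw from can be inspected with Get-AzVMUsage:

```powershell
# Sketch only: list the compute quota entries for a region and filter to one VM family.
Get-AzVMUsage -Location "eastus" |
    Where-Object { $_.Name.LocalizedValue -like "*DSv3*" } |
    Select-Object @{Name = 'Quota'; Expression = { $_.Name.LocalizedValue }}, CurrentValue, Limit
```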
virtual-machines Disks Enable Host Based Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-enable-host-based-encryption-portal.md
description: Use encryption at host to enable end-to-end encryption on your Azur
Previously updated : 07/01/2021 Last updated : 07/22/2021
When you enable encryption at host, data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. For conceptual information on encryption at host, and other managed disk encryption types, see: [Encryption at host - End-to-end encryption for your VM data](./disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data).
+Temporary disks and ephemeral OS disks are encrypted at rest with platform-managed keys when you enable end-to-end encryption. The OS and data disk caches are encrypted at rest with either customer-managed or platform-managed keys, depending on what you select as the disk encryption type. For example, if a disk is encrypted with customer-managed keys, then the cache for the disk is encrypted with customer-managed keys, and if a disk is encrypted with platform-managed keys then the cache for the disk is encrypted with platform-managed keys.
+ ## Restrictions [!INCLUDE [virtual-machines-disks-encryption-at-host-restrictions](../../includes/virtual-machines-disks-encryption-at-host-restrictions.md)]
Sign in to the Azure portal using the [provided link](https://aka.ms/diskencrypt
> [!IMPORTANT] > You must use the [provided link](https://aka.ms/diskencryptionupdates) to access the Azure portal. Encryption at host is not currently visible in the public Azure portal without using the link.
+## Deploy a VM with platform-managed keys
+
+1. Sign in to the [Azure portal](https://aka.ms/diskencryptionupdates).
+1. Search for **Virtual Machines** and select **+ Add** to create a VM.
+1. Create a new virtual machine, select an appropriate region and a supported VM size.
+1. Fill in the other values on the **Basic** pane as you like, then proceed to the **Disks** pane.
+
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics pane, region and V M size are highlighted.":::
+
+1. On the **Disks** pane, select **Encryption at host**.
+1. Make the remaining selections as you like.
+
+   :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/host-based-encryption-platform-keys.png" alt-text="Screenshot of the virtual machine creation disks pane, encryption at host highlighted.":::
+
+1. Finish the VM deployment process, making selections that fit your environment.
+
+You have now deployed a VM with encryption at host enabled, and the cache for the disk is encrypted using platform-managed keys.
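As a hedged alternative to the portal flow above, a minimal PowerShell sketch might check the feature registration and create the VM in one pass. The resource names are placeholders, and the -EncryptionAtHost switch on New-AzVM is an assumption to verify against your installed Az.Compute version.

```powershell
# Sketch only: confirm the EncryptionAtHost feature is registered, then create a VM
# with encryption at host enabled (platform-managed keys by default).
Get-AzProviderFeature -ProviderNamespace "Microsoft.Compute" -FeatureName "EncryptionAtHost"

$cred = Get-Credential   # local administrator credentials for the new VM
New-AzVM -ResourceGroupName "myResourceGroup" `
    -Name "myVM" `
    -Location "eastus" `
    -Size "Standard_DS3_v2" `
    -Credential $cred `
    -EncryptionAtHost        # assumption: available in recent Az.Compute versions
```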
+
+## Deploy a VM with customer-managed keys
+
+Alternatively, you can use customer-managed keys to encrypt your disk caches.
+ ### Create an Azure Key Vault and disk encryption set Once the feature is enabled, you'll need to set up an Azure Key Vault and a disk encryption set, if you haven't already.
Once the feature is enabled, you'll need to set up an Azure Key Vault and a disk
Now that you've setup an Azure Key Vault and disk encryption set, you can deploy a VM and it will use encryption at host.
+1. Sign in to the [Azure portal](https://aka.ms/diskencryptionupdates).
1. Search for **Virtual Machines** and select **+ Add** to create a VM. 1. Create a new virtual machine, select an appropriate region and a supported VM size.
-1. Fill in the other values on the **Basic** blade as you like, then proceed to the **Disks** blade.
+1. Fill in the other values on the **Basic** pane as you like, then proceed to the **Disks** pane.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics blade, region and V M size are highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics pane, region and V M size are highlighted.":::
-1. On the **Disks** blade, select **Yes** for **Encryption at host**.
+1. On the **Disks** pane, select **Encryption at-rest for customer-managed key** for **SSE encryption type** and select your disk encryption set.
+1. Select **Encryption at host**.
1. Make the remaining selections as you like.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-disk-blade.png" alt-text="Screenshot of the virtual machine creation disks blade, encryption at host is highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-host-based-encryption-customer-managed-keys.png" alt-text="Screenshot of the virtual machine creation disks pane, encryption at host is highlighted, customer-managed keys selected.":::
1. Finish the VM deployment process, making selections that fit your environment.
-You have now deployed a VM with encryption at host enabled, all its associated disks will be encrypted using encryption at host.
+You have now deployed a VM with encryption at host enabled.
## Disable host based encryption
Make sure your VM is deallocated first, you cannot disable encryption at host un
1. On your VM, select **Disks** and then select **Additional settings**.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-host-based-encryption-additional-settings.png" alt-text="Screenshot of the Disks blade on a VM, Additional Settings is highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-host-based-encryption-additional-settings.png" alt-text="Screenshot of the Disks pane on a VM, Additional Settings is highlighted.":::
1. Select **No** for **Encryption at host** then select **Save**.
virtual-machines Image Version Another Gallery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/image-version-another-gallery-powershell.md
$destinationImgDef = New-AzGalleryImageDefinition `
## Create the image version
-Create an image version using [New-AzGalleryImageVersion](/powershell/module/az.compute/new-azgalleryimageversion). You will need to pass in the ID of the source image in the `-Source` parameter for creating the image version in your destination gallery.
+Create an image version using [New-AzGalleryImageVersion](/powershell/module/az.compute/new-azgalleryimageversion). You will need to pass in the ID of the source image in the `-SourceImageId` parameter for creating the image version in your destination gallery.
Allowed characters for image version are numbers and periods. Numbers must be within the range of a 32-bit integer. Format: *MajorVersion*.*MinorVersion*.*Patch*.
$job = $imageVersion = New-AzGalleryImageVersion `
-ResourceGroupName myDestinationRG ` -Location WestUS ` -TargetRegion $targetRegions `
- -Source $sourceImgVer.Id.ToString() `
+ -SourceImageId $sourceImgVer.Id.ToString() `
-PublishingProfileEndOfLifeDate '2020-12-01' ` -asJob ```
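For context, a hedged sketch of how $sourceImgVer might be populated before the call above (the gallery and definition names are placeholders):

```powershell
# Sketch only: look up the source image version whose ID is passed to -SourceImageId.
$sourceImgVer = Get-AzGalleryImageVersion `
    -ResourceGroupName "mySourceRG" `
    -GalleryName "mySourceGallery" `
    -GalleryImageDefinitionName "myImageDefinition" `
    -Name "1.0.0"
```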
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/linux/overview.md
Microsoft works closely with partners to ensure the images available are updated
* Linux on Azure - [Endorsed Distributions](endorsed-distros.md) * SUSE - [Azure Marketplace - SUSE Linux Enterprise Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=suse) * Red Hat - [Azure Marketplace - Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Red%20Hat%20Enterprise%20Linux)
-* Canonical - [Azure Marketplace - Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps/Canonical.UbuntuServer)
+* Canonical - [Azure Marketplace - Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&filters=partners&search=canonical)
* Debian - [Azure Marketplace - Debian](https://azuremarketplace.microsoft.com/marketplace/apps?search=Debian&page=1) * FreeBSD - [Azure Marketplace - FreeBSD](https://azuremarketplace.microsoft.com/marketplace/apps?search=freebsd&page=1) * Flatcar - [Azure Marketplace - Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Flatcar&page=1)
virtual-machines Hana Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-architecture.md
vm-linux Previously updated : 05/19/2021 Last updated : 07/21/2021
In this article, we'll describe the architecture for deploying SAP HANA on Azure Large Instances (otherwise known as BareMetal Infrastructure).
-At a high level, the SAP HANA on Azure (Large Instances) solution has the SAP application layer on virtual machines (VMs). The database layer is on the SAP certified HANA Large Instance, which located in the same Azure region as the Azure IaaS VMs.
+At a high level, the SAP HANA on Azure (Large Instances) solution has the SAP application layer on virtual machines (VMs). The database layer is on the SAP certified HANA Large Instance (HLI). The HLI is located in the same Azure region as the Azure IaaS VMs.
> [!NOTE] > Deploy the SAP application layer in the same Azure region as the SAP database management system (DBMS) layer. This rule is well documented in published information about SAP workloads on Azure. ## Architectural overview
-The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI-certified hardware configuration. The hardware is a non-virtualized, bare metal, high-performance server for the SAP HANA database. It also provides the flexibility of Azure to scale resources for the SAP application layer to meet your needs.
+The overall architecture of SAP HANA on Azure (Large Instances) provides an SAP TDI-certified hardware configuration. The hardware is a non-virtualized, bare metal, high-performance server for the SAP HANA database. It gives you the flexibility to scale resources for the SAP application layer to meet your needs.
![Architectural overview of SAP HANA on Azure (Large Instances)](./media/hana-overview-architecture/image1-architecture.png)
The architecture shown is divided into three sections:
- [Use SAP on Windows virtual machines](./get-started.md?toc=/azure/virtual-machines/linux/toc.json) - [Use SAP solutions on Azure virtual machines](get-started.md) -- **Left**: Shows the SAP HANA TDI-certified hardware in the Azure Large Instance stamp. The HANA Large Instance units connect to the virtual networks of your Azure subscription via same technology on-premises connects into Azure. In May 2019, we introduced an optimization that allows communication between the HANA Large Instance units and the Azure VMs without the ExpressRoute Gateway. This optimization, called ExpressRoute FastPath, is shown in the preceding diagram by the red lines.
+- **Left**: Shows the SAP HANA TDI-certified hardware in the Azure Large Instance stamp. The HANA Large Instance units connect to the virtual networks of your Azure subscription using the same technology on-premises servers use to connect into Azure. In May 2019, we introduced an optimization that allows communication between the HANA Large Instance units and the Azure VMs without the ExpressRoute Gateway. This optimization, called ExpressRoute FastPath, is shown in the preceding diagram by the red lines.
## Components of the Azure Large Instance stamp
The Azure Large Instance stamp itself combines the following components:
## Tenants
-Within the multi-tenant infrastructure of the Large Instance stamp, customers are deployed as isolated tenants. At deployment of the tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription is the one that the HANA Large Instance is billed against. These tenants have a 1:1 relationship to the Azure subscription. For a network, it's possible to access a HANA Large Instance unit deployed in one tenant in one Azure region from different virtual networks that belong to different Azure subscriptions. Those Azure subscriptions must belong to the same Azure enrollment.
+Within the multi-tenant infrastructure of the Large Instance stamp, customers are deployed as isolated tenants. At deployment of the tenant, you name an Azure subscription within your Azure enrollment. This Azure subscription is the one the HANA Large Instance is billed against. These tenants have a 1:1 relationship to the Azure subscription.
+
+For a network, it's possible to access a HANA Large Instance deployed in one tenant in one Azure region from different virtual networks belonging to different Azure subscriptions. Those Azure subscriptions must belong to the same Azure enrollment.
## Availability across regions
virtual-machines Hana Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/hana-network-architecture.md
vm-linux Previously updated : 05/25/2021 Last updated : 07/21/2021
The networking architecture for HANA Large Instances can be separated into four
The following two requirements still hold even though you use Hana Large Instances: - Your on-premises assets must connect through ExpressRoute to Azure.-- You need one or more virtual networks that run your VMs. These VMs host the application layer that connects to the HANA instances hosted in HANA Large Instance units.
+- You need one or more virtual networks that run your VMs. These VMs host the application layer that connects to the HANA instances hosted in HANA Large Instances.
-The differences to SAP deployments in Azure are:
+The differences in SAP deployments in Azure are:
-- The HANA Large Instance units of your tenant are connected through another ExpressRoute circuit into your virtual networks. The on-premises to Azure virtual network ExpressRoute circuits and the circuits between Azure virtual networks and HANA Large Instances don't share the same routers. Their load conditions remain separate.-- The workload profile between the SAP application layer and the HANA Large Instance is of a different nature, with many small requests and bursts like data transfers (result sets) from SAP HANA into the application layer.
+- The HANA Large Instances of your tenant are connected through another ExpressRoute circuit into your virtual networks. The on-premises to Azure virtual network ExpressRoute circuits and the circuits between Azure virtual networks and HANA Large Instances don't share the same routers. Their load conditions remain separate.
+- The workload profile between the SAP application layer and the HANA Large Instance is of a different nature. SAP HANA generates many small requests and bursts like data transfers (result sets) into the application layer.
- The SAP application architecture is more sensitive to network latency than typical scenarios where data is exchanged between on-premises and Azure. - The Azure ExpressRoute gateway has at least two ExpressRoute connections. One circuit is connected from on-premises and one is connected from the HANA Large Instance. This configuration leaves only room for two more circuits from different MSEEs to connect to the ExpressRoute Gateway. This restriction is independent of the usage of ExpressRoute FastPath. All the connected circuits share the maximum bandwidth for incoming data of the ExpressRoute gateway.
-With Revision 3 of HANA Large Instance stamps, the network latency between VMs and HANA Large Instance units can be higher than typical VM-to-VM network round-trip latencies. Depending on the Azure region, values can exceed the 0.7 ms round-trip latency classified as below average in [SAP Note #1100926 - FAQ: Network performance](https://launchpad.support.sap.com/#/notes/1100926/E). Depending on Azure Region and the tool to measure network round-trip latency between an Azure VM and HANA Large Instance unit, the latency can be up to 2 milliseconds. Nevertheless, customers deploy SAP HANA-based production SAP applications successfully on SAP HANA Large Instance. Make sure you test your business processes thoroughly in Azure HANA Large Instance. A new functionality, called ExpressRoute FastPath, is able to reduce the network latency between HANA Large Instances and application layer VMs in Azure substantially (see below).
+With Revision 3 of HANA Large Instance stamps, the network latency between VMs and HANA Large Instance units can be higher than typical VM-to-VM network round-trip latencies. Depending on the Azure region, values can exceed the 0.7-ms round-trip latency classified as below average in [SAP Note #1100926 - FAQ: Network performance](https://launchpad.support.sap.com/#/notes/1100926/E). Depending on Azure Region and the tool to measure network round-trip latency between an Azure VM and HANA Large Instance, the latency can be up to 2 milliseconds. Still, customers successfully deploy SAP HANA-based production SAP applications on SAP HANA Large Instances. Make sure you test your business processes thoroughly with Azure HANA Large Instances. A new functionality, called ExpressRoute FastPath, can substantially reduce the network latency between HANA Large Instances and application layer VMs in Azure (see below).
-With Revision 4 of HANA Large Instance stamps, the network latency between Azure VMs deployed in proximity to the HANA Large Instance stamp improves. Latency meets the average or better than average classification as documented in [SAP Note #1100926 - FAQ: Network performance](https://launchpad.support.sap.com/#/notes/1100926/E) if Azure ExpressRoute FastPath is configured (see below). In order to deploy Azure VMs in close proximity to HANA Large Instance units of Revision 4, you need to apply [Azure Proximity Placement Groups](../../co-location.md). How proximity placement groups can be used to locate the SAP application layer in the same Azure datacenter as Revision 4 hosted HANA Large Instance units is described in [Azure Proximity Placement Groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md).
+Revision 4 of HANA Large Instance stamps improves network latency between Azure VMs deployed in proximity to the HANA Large Instance stamp. Latency meets the average or better than average classification as documented in [SAP Note #1100926 - FAQ: Network performance](https://launchpad.support.sap.com/#/notes/1100926/E) if Azure ExpressRoute FastPath is configured (see below).
-To provide deterministic network latency between VMs and HANA Large Instance, the choice of the ExpressRoute gateway SKU is essential. Unlike the traffic patterns between on-premises and VMs, the traffic patterns between VMs and HANA Large Instances can develop small but high bursts of requests and data volumes. To handle such bursts, we highly recommend the use of the UltraPerformance gateway SKU. For the Type II class of HANA Large Instance SKUs, the use of the UltraPerformance gateway SKU as a ExpressRoute gateway is mandatory.
+To deploy Azure VMs in proximity to HANA Large Instances of Revision 4, you need to apply [Azure Proximity Placement Groups](../../co-location.md). Proximity placement groups can be used to locate the SAP application layer in the same Azure datacenter as Revision 4 hosted HANA Large Instances. For more information, see [Azure Proximity Placement Groups for optimal network latency with SAP applications](sap-proximity-placement-scenarios.md).
+
+To provide deterministic network latency between VMs and HANA Large Instances, the choice of ExpressRoute gateway SKU is essential. Unlike the traffic patterns between on-premises and VMs, the traffic patterns between VMs and HANA Large Instances can develop small but high bursts of requests and data volumes. To handle such bursts, we highly recommend using the UltraPerformance gateway SKU. For the Type II class of HANA Large Instance SKUs, using the UltraPerformance gateway SKU as an ExpressRoute gateway is mandatory.
> [!IMPORTANT] > Given the overall network traffic between the SAP application and database layers, only the HighPerformance or UltraPerformance gateway SKUs for virtual networks are supported for connecting to SAP HANA on Azure (Large Instances). For HANA Large Instance Type II SKUs, only the UltraPerformance gateway SKU is supported as an ExpressRoute gateway. Exceptions apply when using ExpressRoute FastPath (see below). ### ExpressRoute FastPath
-ExpressRoute FastPath was released in May 2019 specifically to lower the latency between HANA Large Instances to Azure virtual networks that host the SAP application VMs. In this solution, the data flows between VMs and HANA Large Instances are no longer routed through the ExpressRoute gateway. Instead, the VMs assigned in the subnet(s) of the Azure virtual network directly communicate with the dedicated enterprise edge router.
+In May 2019, we released ExpressRoute FastPath. FastPath lowers the latency between HANA Large Instances and Azure virtual networks that host the SAP application VMs. With FastPath, the data flows between VMs and HANA Large Instances aren't routed through the ExpressRoute gateway. The VMs assigned in the subnet(s) of the Azure virtual network directly communicate with the dedicated enterprise edge router.
> [!IMPORTANT] > ExpressRoute FastPath requires that the subnets running the SAP application VMs are in the same Azure virtual network that is connected to the HANA Large Instances. VMs located in Azure virtual networks that are peered with the Azure virtual network connected to the HANA Large Instance units do not benefit from ExpressRoute FastPath. As a result, typical hub and spoke virtual network designs, where the ExpressRoute circuits connect against a hub virtual network and virtual networks containing the SAP application layer (spokes) are peered, the optimization by ExpressRoute FastPath won't work. ExpressRoute FastPath also doesn't currently support user defined routing rules (UDR). For more information, see [ExpressRoute virtual network gateway and FastPath](../../../expressroute/expressroute-about-virtual-network-gateways.md).
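As a hedged sketch (resource names are placeholders, and the parameter names should be verified against current Az.Network documentation), FastPath is typically enabled when the connection between the ExpressRoute gateway and the circuit is created:

```powershell
# Sketch only: create the gateway-to-circuit connection with FastPath enabled
# via the -ExpressRouteGatewayBypass switch. Names are placeholders.
$gw      = Get-AzVirtualNetworkGateway -ResourceGroupName "myRG" -Name "myERGateway"
$circuit = Get-AzExpressRouteCircuit -ResourceGroupName "myRG" -Name "myHliCircuit"

New-AzVirtualNetworkGatewayConnection -ResourceGroupName "myRG" `
    -Name "hli-fastpath-connection" `
    -Location "westus2" `
    -VirtualNetworkGateway1 $gw `
    -PeerId $circuit.Id `
    -ConnectionType ExpressRoute `
    -ExpressRouteGatewayBypass
```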
The on-premises infrastructure previously shown is connected through ExpressRout
> [!NOTE] > To run SAP landscapes in Azure, connect to the enterprise edge router closest to the Azure region in the SAP landscape. HANA Large Instance stamps are connected through dedicated enterprise edge routers to minimize network latency between VMs in Azure IaaS and HANA Large Instance stamps.
-The ExpressRoute gateway for the VMs that host SAP application instances are connected to one ExpressRoute circuit that connects to on-premises. The same virtual network is connected to a separate enterprise edge router dedicated to connecting to Large Instance stamps. Using ExpressRoute FastPath, again, the data flow from HANA Large Instances to the SAP application layer VMs isn't routed through the ExpressRoute gateway. This configuration reduces the network round-trip latency.
+The ExpressRoute gateway for the VMs that host SAP application instances is connected to one ExpressRoute circuit that connects to on-premises. The same virtual network is connected to a separate enterprise edge router. That edge router is dedicated to connecting to Large Instance stamps. Again, with FastPath, the data flow from HANA Large Instances to the SAP application layer VMs isn't routed through the ExpressRoute gateway. This configuration reduces the network round-trip latency.
This system is a straightforward example of a single SAP system. The SAP application layer is hosted in Azure. The SAP HANA database runs on SAP HANA on Azure (Large Instances). The assumption is that the ExpressRoute gateway bandwidth of 2-Gbps or 10-Gbps throughput doesn't represent a bottleneck. ## Multiple SAP systems or large SAP systems
-If multiple SAP systems or large SAP systems are deployed to connect to SAP HANA on Azure (Large Instances), the throughput of the ExpressRoute gateway might become a bottleneck. In that case, you can split the application layers into multiple virtual networks. You can also split the application layers if you want to isolate production and non-production systems in different Azure virtual networks.
+If you deploy multiple SAP systems or large SAP systems connecting to SAP HANA (Large Instances), the throughput of the ExpressRoute gateway might become a bottleneck. In that case, split the application layers into multiple virtual networks. You can also split the application layers if you want to isolate production and non-production systems in different Azure virtual networks.
-You might create a special virtual network that connects to HANA Large Instance for cases such as:
+You might create a special virtual network that connects to HANA Large Instances when:
-- Performing backups directly from the HANA instances in HANA Large Instance to a VM in Azure that hosts NFS shares.-- Copying large backups or other files from HANA Large Instance units to disk space managed in Azure.
+- Doing backups directly from the HANA instances in a HANA Large Instance to a VM in Azure that hosts NFS shares.
+- Copying large backups or other files from HANA Large Instances to disk space managed in Azure.
-Use a separate virtual network to host VMs that manage storage for mass transfer of data between HANA Large Instances and Azure. This arrangement avoids large file or data transfer from HANA Large Instances to Azure on the ExpressRoute gateway that serves the VMs that run the SAP application layer.
+Use a separate virtual network to host VMs that manage storage for mass transfer of data between HANA Large Instances and Azure. This arrangement avoids large file or data transfer from HANA Large Instances to Azure on the ExpressRoute gateway that serves the VMs running the SAP application layer.
-For a more scalable network architecture:
+For a more expandable network architecture:
- Use multiple virtual networks for a single, larger SAP application layer. - Deploy one separate virtual network for each SAP system deployed, compared to combining these SAP systems in separate subnets under the same virtual network.
- The following diagram shows a more scalable networking architecture for SAP HANA on Azure (Large Instances):
+ The following diagram shows a more expandable networking architecture for SAP HANA on Azure (Large Instances):
![Deploy SAP application layer over multiple virtual networks](./media/hana-overview-architecture/image4-networking-architecture.png)
Depending on the rules and restrictions you want to apply between the different
By default deployment, three network routing considerations are important for SAP HANA on Azure (Large Instances): -- SAP HANA on Azure (Large Instances) can be accessed only through Azure VMs and the dedicated ExpressRoute connection, not directly from on-premises. Direct access from on-premises to the HANA Large Instance units, as delivered by Microsoft to you, isn't possible immediately. The transitive routing restrictions are due to the current Azure network architecture used for SAP HANA Large Instances. Some administration clients and any applications that need direct access, such as SAP Solution Manager running on-premises, can't connect to the SAP HANA database. For exceptions, see the following section, [Direct Routing to HANA Large Instances](#direct-routing-to-hana-large-instances).
+- SAP HANA on Azure (Large Instances) can be accessed only through Azure VMs and the dedicated ExpressRoute connection, not directly from on-premises. Direct access from on-premises to the HANA Large Instance units, as delivered by Microsoft to you, isn't possible immediately. The transitive routing restrictions are because of the current Azure network architecture used for SAP HANA Large Instances. Some administration clients and any applications that need direct access, such as SAP Solution Manager running on-premises, can't connect to the SAP HANA database. For exceptions, see the following section, [Direct Routing to HANA Large Instances](#direct-routing-to-hana-large-instances).
-- If you have HANA Large Instance units deployed in two different Azure regions for disaster recovery, the same transient routing restrictions apply as in the past. In other words, IP addresses of a HANA Large Instance unit in one region (for example, US West) weren't routed to a HANA Large Instance unit deployed in another region (for example, US East). This restriction is independent of the use of Azure network peering across regions or cross-connecting the ExpressRoute circuits that connect HANA Large Instance units to virtual networks. For a graphic representation, see the figure in the section, [Use HANA Large Instance units in multiple regions](#use-hana-large-instance-units-in-multiple-regions). This restriction, which came with the deployed architecture, prohibited the immediate use of HANA system replication for disaster recovery. For recent changes, again, see [Use HANA Large Instance units in multiple regions](#use-hana-large-instance-units-in-multiple-regions).
+- If you have HANA Large Instance units deployed in two different Azure regions for disaster recovery, the same transient routing restrictions apply as in the past. In other words, IP addresses of a HANA Large Instance in one region (for example, US West) weren't routed to a HANA Large Instance deployed in another region (for example, US East). This restriction is independent of the use of Azure network peering across regions or cross-connecting the ExpressRoute circuits that connect HANA Large Instances to virtual networks. For a graphic representation, see the figure in the section, [Use HANA Large Instance units in multiple regions](#use-hana-large-instance-units-in-multiple-regions). This restriction, which came with the deployed architecture, prohibited the immediate use of HANA system replication for disaster recovery. For recent changes, again, see [Use HANA Large Instance units in multiple regions](#use-hana-large-instance-units-in-multiple-regions).
-- SAP HANA on Azure (Large Instances) units have an assigned IP address from the server IP pool address range that you submitted when requesting the HANA Large Instance deployment. For more information, see [SAP HANA (Large Instances) infrastructure and connectivity on Azure](hana-overview-infrastructure-connectivity.md). This IP address is accessible through the Azure subscriptions and circuit that connects Azure virtual networks to HANA Large Instances. The IP address assigned out of that server IP pool address range is directly assigned to the hardware unit. It's *not* assigned through network address translation (NAT) anymore, as was the case in the first deployments of this solution.
+- SAP HANA on Azure Large Instances has an assigned IP address from the server IP pool address range that you submitted when requesting the HANA Large Instance deployment. For more information, see [SAP HANA (Large Instances) infrastructure and connectivity on Azure](hana-overview-infrastructure-connectivity.md). This IP address is accessible through the Azure subscriptions and circuit that connects Azure virtual networks to HANA Large Instances. The IP address assigned out of that server IP pool address range is directly assigned to the hardware unit. It's *not* assigned through network address translation (NAT) anymore, as was the case in the first deployments of this solution.
### Direct Routing to HANA Large Instances
By default, the transitive routing doesn't work in these scenarios:
There are three ways to enable transitive routing in those scenarios: - A reverse-proxy to route data, to and from. For example, F5 BIG-IP, NGINX with Traffic Manager deployed in the Azure virtual network that connects to HANA Large Instances and to on-premises as a virtual firewall/traffic routing solution.-- Using IPTables rules in a Linux VM to enable routing between on-premises locations and HANA Large Instance units, or between HANA Large Instance units in different regions. The VM running IPTables must be deployed in the Azure virtual network that connects to HANA Large Instances and to on-premises. The VM must be sized accordingly so that the network throughput of the VM is sufficient for the expected network traffic. For more information on VM network bandwidth, check the article [Sizes of Linux virtual machines in Azure](../../sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
+- Using IPTables rules in a Linux VM to enable routing between on-premises locations and HANA Large Instance units, or between HANA Large Instance units in different regions. The VM running IPTables must be deployed in the Azure virtual network that connects to HANA Large Instances and to on-premises. The VM must be sized so that the network throughput of the VM is sufficient for the expected network traffic. For more information on VM network bandwidth, check the article [Sizes of Linux virtual machines in Azure](../../sizes.md?toc=%2fazure%2fvirtual-network%2ftoc.json).
- [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) would be another solution to enable direct traffic between on-premises and HANA Large instance units. All the traffic of these solutions would be routed through an Azure virtual network. As such, the traffic could also be restricted by the soft appliances used or by Azure Network Security Groups. In this way, specific IP addresses or IP address ranges from on-premises could either be blocked or explicitly allowed access to HANA Large Instances.
Microsoft introduced a new functionality called [ExpressRoute Global Reach](../.
##### Direct Access from on-premises
-In the Azure regions where Global Reach is offered, you can request enabling the Global Reach functionality for your ExpressRoute circuit that connects your on-premises network to the Azure virtual network that connects to your HANA Large Instance units. There are some cost implications for the on-premises side of your ExpressRoute circuit. For more information, see the pricing for [Global Reach Add-On](https://azure.microsoft.com/pricing/details/expressroute/). There are no added costs for you related to the circuit that connects the HANA Large Instance unit(s) to Azure.
+In Azure regions where Global Reach is offered, you can request enabling Global Reach for your ExpressRoute circuit. That circuit connects your on-premises network to the Azure virtual network that connects to your HANA Large Instances. There are costs for the on-premises side of your ExpressRoute circuit. For more information, see the pricing for [Global Reach Add-On](https://azure.microsoft.com/pricing/details/expressroute/). You won't pay added costs for the circuit that connects the HANA Large Instances to Azure.
> [!IMPORTANT] > When you use Global Reach to enable direct access between your HANA Large Instance units and on-premises assets, the network data and control flows are **not routed through Azure virtual networks**. Instead, they're routed directly between the Microsoft enterprise exchange routers. So any NSG or ASG rules, or any type of firewall, NVA, or proxy you deployed in an Azure virtual network, aren't applied. **If you use ExpressRoute Global Reach to enable direct access from on-premises to HANA Large Instance units, restrictions and permissions to access HANA Large Instance units need to be defined in firewalls on the on-premises side.**
For more information on how to enable ExpressRoute Global Reach, see [Connect a
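For HANA Large Instances, Global Reach is enabled on your behalf when you request it, because the circuit on the HANA Large Instance side is managed by Microsoft. For ExpressRoute circuits you manage yourself, the general pattern looks roughly like the following Azure CLI sketch; the circuit names, resource group, and the /29 prefix are placeholders, not values from this article.

```bash
# Sketch only: link two ExpressRoute circuits you manage with Global Reach.
az network express-route peering connection create \
  --resource-group myResourceGroup \
  --circuit-name myCircuit1 \
  --peering-name AzurePrivatePeering \
  --name myGlobalReachConnection \
  --peer-circuit myCircuit2 \
  --address-prefix 192.168.100.0/29
```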
## Internet connectivity of HANA Large Instance
-HANA Large Instances do *not* have direct internet connectivity. As an example, this limitation might restrict your ability to register the OS image directly with the OS vendor. You might need to work with your local SUSE Linux Enterprise Server Subscription Management Tool server or Red Hat Enterprise Linux Subscription Manager.
+HANA Large Instances *don't* have direct internet connectivity. As an example, this limitation might restrict your ability to register the OS image directly with the OS vendor. You might need to work with your local SUSE Linux Enterprise Server Subscription Management Tool server or Red Hat Enterprise Linux Subscription Manager.
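As a rough illustration of that workflow only: the commands below register an OS against local update infrastructure. The server URL and credentials are placeholders, and the exact procedure depends on your OS version and the vendor's documentation.

```bash
# SLES: register against an internal SMT/RMT server instead of the public SUSE Customer Center.
sudo SUSEConnect --url https://smt.contoso.local

# RHEL: register through Red Hat Subscription Manager (credentials are placeholders).
sudo subscription-manager register --username <user> --password <password>
```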
## Data encryption between VMs and HANA Large Instance
-Data transferred between HANA Large Instances and VMs isn't encrypted. However, purely for the exchange between the HANA DBMS side and JDBC/ODBC-based applications, you can enable encryption of traffic. For more information, see [Secure Communication Between SAP HANA and JDBC/ODBC Clients](https://help.sap.com/viewer/102d9916bf77407ea3942fef93a47da8/1.0.11/en-US/dbd3d887bb571014bf05ca887f897b99.html).
+Data transferred between HANA Large Instances and VMs isn't encrypted. However, for the exchange between the HANA DBMS side and JDBC/ODBC-based applications, you can enable encryption of traffic. For more information, see [Secure Communication Between SAP HANA and JDBC/ODBC Clients](https://help.sap.com/viewer/102d9916bf77407ea3942fef93a47da8/1.0.11/en-US/dbd3d887bb571014bf05ca887f897b99.html).
## Use HANA Large Instance units in multiple regions For disaster recovery, you need to have HANA Large Instance units in multiple Azure regions. Using only Azure [Global VNet Peering](../../../virtual-network/virtual-network-peering-overview.md), transitive routing won't work by default between HANA Large Instance tenants in different regions. Global Reach, however, opens up communication between HANA Large Instance units in different regions. This scenario using ExpressRoute Global Reach enables: - HANA system replication without any additional proxies or firewalls.
+ - Copying backups between HANA Large Instance units in different regions to make system copies or do system refreshes.
![Virtual network connected to Azure Large Instance stamps in different Azure regions](./media/hana-overview-architecture/image8-multiple-regions.png)
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/public-ip-address-prefix.md
# Public IP address prefix
-A public IP address prefix is a reserved range of [public IP addresses](./public-ip-addresses.md#public-ip-addresses) in Azure. Public IP prefixes are assigned from a [pool of addresses](https://www.microsoft.com/download/details.aspx?id=56519) in each Azure region.
+A public IP address prefix is a reserved range of [public IP addresses](./public-ip-addresses.md#public-ip-addresses) in Azure. Public IP prefixes are assigned from a pool of addresses in each Azure region.
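As an illustration only of the creation flow described in the next paragraph, the following Azure CLI sketch reserves a /28 prefix (16 addresses) and then allocates one public IP address from it; the resource names are placeholders.

```bash
# Sketch only: reserve a prefix, then create a public IP address from it.
az network public-ip prefix create \
  --resource-group myResourceGroup \
  --name myPublicIpPrefix \
  --length 28 \
  --location eastus

az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIp \
  --public-ip-prefix myPublicIpPrefix \
  --sku Standard \
  --allocation-method Static
```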
You create a public IP address prefix in an Azure region and subscription by specifying a name and prefix size. The prefix size determines the number of addresses available for use. Public IP address prefixes consist of IPv4 or IPv6 addresses. After the public IP prefix is created, you can create public IP addresses from it. ## Benefits
For costs associated with using Azure Public IPs, both individual IP addresses a
## Next steps -- [Create](manage-public-ip-address-prefix.md) a public IP address prefix
+- [Create](manage-public-ip-address-prefix.md) a public IP address prefix
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/about-virtual-hub-routing.md
Consider the following when configuring Virtual WAN routing:
* Branch-to-branch via Azure Firewall is currently not supported. * When using Azure Firewall in multiple regions, all spoke virtual networks must be associated with the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub is not possible. * A single next hop IP can be configured per VNet connection.-
+* All information pertaining to the 0.0.0.0/0 route is confined to a local hub's route table. This route doesn't propagate across hubs.
## Next steps * To configure routing, see [How to configure virtual hub routing](how-to-virtual-hub-routing.md).
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/monitor-virtual-wan.md
The following metrics are available for Azure point-to-site VPN gateways:
| Metric | Description |
| --- | --- |
| **Gateway P2S Bandwidth** | Average point-to-site aggregate bandwidth of a gateway in bytes per second. |
-| **P2S Connection Count** |Point-to-site connection count of a gateway. |
+| **P2S Connection Count** | Point-to-site connection count of a gateway. To ensure you're viewing accurate metrics in Azure Monitor, select **Sum** as the **Aggregation Type** for **P2S Connection Count**. You can also select **Max** if you split by **Instance**. |
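Outside the portal, a rough way to pull the same data is with the Azure CLI. In this sketch, the metric name, resource ID, and interval are assumptions for illustration; confirm the exact metric name your gateway exposes in Azure Monitor.

```bash
# Sketch only: query the point-to-site connection count with the Sum (Total) aggregation.
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/p2sVpnGateways/<gateway-name>" \
  --metric "P2SConnectionCount" \
  --aggregation Total \
  --interval PT5M
```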
### Azure ExpressRoute gateways
vpn-gateway Active Active Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/active-active-portal.md
+
+ Title: 'Configure active-active VPN gateways: Azure portal'
+
+description: Learn how to configure active-active virtual network gateways using the Azure portal.
+++++ Last updated : 07/22/2021++++
+# Configure active-active VPN gateways using the portal
+
+This article helps you create highly available active-active VPN gateways using the Resource Manager deployment model and Azure portal. You can also configure an active-active gateway using [PowerShell](vpn-gateway-activeactive-rm-powershell.md).
+
+To achieve high availability for cross-premises and VNet-to-VNet connectivity, you should deploy multiple VPN gateways and establish multiple parallel connections between your networks and Azure. See [Highly Available cross-premises and VNet-to-VNet connectivity](vpn-gateway-highlyavailable.md) for an overview of connectivity options and topology.
+
+> [!IMPORTANT]
+> The active-active mode is available for all SKUs except Basic or Standard. For more information, see [Configuration settings](vpn-gateway-about-vpn-gateway-settings.md#gwsku).
+>
+
+The steps in this article help you configure a VPN gateway in active-active mode. There are a few differences between active-active and active-standby modes; otherwise, the properties are the same as for active-standby gateways.
+
+* Active-active gateways have two Gateway IP configurations and two public IP addresses.
+* Active-active gateways have the active-active setting enabled.
+* The virtual network gateway SKU can't be Basic or Standard.
+
+If you already have a VPN gateway, you can [Update an existing VPN gateway](#update) from active-standby to active-active mode, or from active-active to active-standby mode.
+
+## <a name="vnet"></a>Create a VNet
+
+If you don't already have a VNet that you want to use, create a VNet using the following values (an approximate CLI sketch of these values follows the list):
+
+* **Resource group:** TestRG1
+* **Name:** VNet1
+* **Region:** (US) East US
+* **IPv4 address space:** 10.1.0.0/16
+* **Subnet name:** FrontEnd
+* **Subnet address space:** 10.1.0.0/24
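The article itself uses the portal. As a sketch only, the equivalent VNet could be created with the Azure CLI roughly as follows.

```bash
# Sketch only: create the resource group and VNet with the values listed above.
az group create --name TestRG1 --location eastus

az network vnet create \
  --resource-group TestRG1 \
  --name VNet1 \
  --location eastus \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name FrontEnd \
  --subnet-prefixes 10.1.0.0/24
```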
++
+## <a name="gateway"></a>Create an active-active VPN gateway
+
+In this step, you create an active-active virtual network gateway (VPN gateway) for your VNet. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
+
+Create a virtual network gateway using the following values (an approximate CLI sketch of these values follows the list):
+
+* **Name:** VNet1GW
+* **Region:** East US
+* **Gateway type:** VPN
+* **VPN type:** Route-based
+* **SKU:** VpnGw2
+* **Generation:** Generation2
+* **Virtual network:** VNet1
+* **Gateway subnet address range:** 10.1.255.0/27
+* **Public IP address:** Create new
+* **Public IP address name:** VNet1GWpip
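The article itself uses the portal. As a sketch only, an active-active gateway with these values could be created with the Azure CLI roughly as follows. The second public IP name (VNet1GWpip2) is an assumption, because active-active mode needs two public IP addresses.

```bash
# Sketch only: gateway subnet, two public IPs, and an active-active gateway.
az network vnet subnet create \
  --resource-group TestRG1 \
  --vnet-name VNet1 \
  --name GatewaySubnet \
  --address-prefixes 10.1.255.0/27

az network public-ip create --resource-group TestRG1 --name VNet1GWpip --allocation-method Dynamic
az network public-ip create --resource-group TestRG1 --name VNet1GWpip2 --allocation-method Dynamic

# Passing two public IP addresses creates the gateway in active-active mode.
az network vnet-gateway create \
  --resource-group TestRG1 \
  --name VNet1GW \
  --vnet VNet1 \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw2 \
  --vpn-gateway-generation Generation2 \
  --public-ip-addresses VNet1GWpip VNet1GWpip2 \
  --no-wait
```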
++
+A gateway can take up to 45 minutes to fully create and deploy. You can see the deployment status on the Overview page for your gateway. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
++
+## <a name ="update"></a> Update an existing VPN gateway
+
+This section helps you change an existing Azure VPN gateway from active-standby to active-active mode, and from active-active to active-standby mode. When you change an active-standby gateway to active-active, you create another public IP address, then add a second gateway IP configuration.
+
+### Change active-standby to active-active
+
+Use the following steps to convert an active-standby mode gateway to active-active mode. If your gateway was created using the Resource Manager deployment model, you can also upgrade the SKU on this page.
+
+1. Navigate to the page for your virtual network gateway.
+
+1. On the left menu, select **Configuration**.
+
+1. On the **Configuration** page, configure the following settings:
+
+ * Change the Active-active mode to **Enabled**.
+ * Click **Create another gateway IP configuration**.
+
+ :::image type="content" source="./media/active-active-portal/configuration.png" alt-text="Screenshot shows the Configuration page.":::
+
+1. On the **Choose public IP address** page, either specify an existing public IP address that meets the criteria, or select **+ Create new** to create a new public IP address to use for the second VPN gateway instance.
+
+1. On the **Create public IP address** page, select the **Basic** SKU, then click **OK**.
+
+1. At the top of the **Configuration** page, click **Save**. This update can take about 30-45 minutes to complete.
+
+### Change active-active to active-standby
+
+Use the following steps to convert an active-active mode gateway to active-standby mode.
+
+1. Navigate to the page for your virtual network gateway.
+
+1. On the left menu, select **Configuration**.
+
+1. On the **Configuration** page, change the Active-active mode to **Disabled**.
+
+1. At the top of the **Configuration** page, click **Save**.
+
+## Next steps
+
+To configure connections, see the following articles:
+
+* [Site-to-Site VPN connections](./tutorial-site-to-site-portal.md)
+* [VNet-to-VNet connections](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md#configure-the-vnet1-gateway-connection)
vpn-gateway Howto Point To Site Multi Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/howto-point-to-site-multi-auth.md
Previously updated : 02/22/2021 Last updated : 07/21/2021
You can use the following values to create a test environment, or refer to these
* **Resource Group:** TestRG1 * **Location:** East US * **GatewaySubnet:** 10.1.255.0/27<br>
-* **Virtual network gateway name:** VNet1GW
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
* **Gateway type:** VPN * **VPN type:** Route-based * **Public IP address name:** VNet1GWpip
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 07/16/2021 Last updated : 07/21/2021 #Customer intent: I want to create a VPN gateway for my virtual network so that I can connect to my VNet and communicate with resources remotely.
Create a virtual network gateway using the following values:
* **Region:** East US * **Gateway type:** VPN * **VPN type:** Route-based
-* **SKU:** VpnGw1
-* **Generation:** Generation1
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
* **Virtual network:** VNet1 * **Gateway subnet address range:** 10.1.255.0/27 * **Public IP address:** Create new
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 04/28/2021 Last updated : 07/21/2021
Create a VPN gateway using the following values:
* **Region:** East US * **Gateway type:** VPN * **VPN type:** Route-based
-* **SKU:** VpnGw1
-* **Generation:** Generation1
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
* **Virtual network:** VNet1 * **Gateway subnet address range:** 10.1.255.0/27 * **Public IP address:** Create new
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
Previously updated : 06/03/2021 Last updated : 07/21/2021
You can use the following values to create a test environment, or refer to these
* **Virtual network gateway name:** VNet1GW * **Gateway type:** VPN * **VPN type:** Route-based
+* **SKU:** VpnGw2
+* **Generation:** Generation 2
* **Public IP address name:** VNet1GWpip * **Connection type:** Point-to-site * **Client address pool:** 172.16.201.0/24<br>VPN clients that connect to the VNet using this Point-to-Site connection receive an IP address from the client address pool.
vpn-gateway Vpn Gateway Howto Vnet Vnet Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md
Previously updated : 04/28/2021 Last updated : 07/21/2021
This article shows you how to connect VNets by using the VNet-to-VNet connection
* **Virtual network gateway settings** * **Name**: VNet1GW * **Resource group**: East US
- * **Generation**: Generation 1
+ * **Generation**: Generation 2
* **Gateway type**: Select **VPN**.
- * **VPN type**: Select **Route*based**.
- * **SKU**: VpnGw1
+ * **VPN type**: Select **Route-based**.
+ * **SKU**: VpnGw2
* **Virtual network**: VNet1 * **Gateway subnet address range**: 10.1.255.0/27 * **Public IP address**: Create new
This article shows you how to connect VNets by using the VNet-to-VNet connection
* **Virtual network gateway settings** * **Name**: VNet4GW * **Resource group**: West US
- * **Generation**: Generation 1
+ * **Generation**: Generation 2
* **Gateway type**: Select **VPN**. * **VPN type**: Select **Route-based**.
- * **SKU**: VpnGw1
+ * **SKU**: VpnGw2
* **Virtual network**: VNet4 * **Gateway subnet address range**: 10.41.255.0/27 * **Public IP address**: Create new
vs-azure-tools Storage Manage With Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vs-azure-tools-storage-manage-with-storage-explorer.md
Microsoft Azure Storage Explorer is a standalone app that makes it easy to work
In this article, you'll learn several ways of connecting to and managing your Azure storage accounts. ## Prerequisites
Storage Explorer provides several ways to connect to Azure resources:
1. In Storage Explorer, select **View** > **Account Management** or select the **Manage Accounts** button.
- :::image type="content" alt-text="Manage Accounts" source ="./vs-storage-explorer-manage-accounts.png":::
+ :::image type="content" alt-text="Manage Accounts" source ="./media/vs-azure-tools-storage-manage-with-storage-explorer/vs-storage-explorer-manage-accounts.png":::
1. **ACCOUNT MANAGEMENT** now displays all the Azure accounts you're signed in to. To connect to another account, select **Add an account...**. 1. The **Connect to Azure Storage** dialog opens. In the **Select Resource** panel, select **Subscription**.
- :::image type="content" alt-text="Connect dialog" source="./vs-storage-explorer-connect-dialog.png":::
+ :::image type="content" alt-text="Connect dialog" source="./media/vs-azure-tools-storage-manage-with-storage-explorer/vs-storage-explorer-connect-dialog.png":::
1. In the **Select Azure Environment** panel, select an Azure environment to sign in to. You can sign in to global Azure, a national cloud or an Azure Stack instance. Then select **Next**.
- :::image type="content" alt-text="Option to sign in" source="./vs-storage-explorer-connect-environment.png":::
+ :::image type="content" alt-text="Option to sign in" source="./media/vs-azure-tools-storage-manage-with-storage-explorer/vs-storage-explorer-connect-environment.png":::
> [!TIP] > For more information about Azure Stack, see [Connect Storage Explorer to an Azure Stack subscription or storage account](/azure-stack/user/azure-stack-storage-connect-se).
Storage Explorer provides several ways to connect to Azure resources:
1. After you successfully sign in with an Azure account, the account and the Azure subscriptions associated with that account appear under **ACCOUNT MANAGEMENT**. Select the Azure subscriptions that you want to work with, and then select **Apply**.
- :::image type="content" alt-text="Select Azure subscriptions" source="./vs-storage-explorer-account-panel.png":::
+ :::image type="content" alt-text="Select Azure subscriptions" source="./media/vs-azure-tools-storage-manage-with-storage-explorer/vs-storage-explorer-account-panel.png":::
1. **EXPLORER** displays the storage accounts associated with the selected Azure subscriptions.
- :::image type="content" alt-text="Selected Azure subscriptions" source="./vs-storage-explorer-subscription-node.png":::
+ :::image type="content" alt-text="Selected Azure subscriptions" source="./media/vs-azure-tools-storage-manage-with-storage-explorer/vs-storage-explorer-subscription-node.png":::
### Attach to an individual resource
Storage Explorer can also connect to a [local storage emulator](#local-storage-e
To connect to an individual resource, select the **Connect** button in the left-hand toolbar. Then follow the instructions for the resource type you want to connect to. When a connection to a storage account is successfully added, a new tree node will appear under **Local & Attached** > **Storage Accounts**.
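If you'd rather attach a single storage account with a connection string instead of signing in, one rough way to retrieve it is with the Azure CLI; the account and resource group names below are placeholders.

```bash
# Sketch only: fetch a connection string to paste into Storage Explorer's Connect dialog.
az storage account show-connection-string \
  --name mystorageaccount \
  --resource-group myResourceGroup \
  --output tsv
```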
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/web-application-firewall/ag/web-application-firewall-logs.md
You can also connect to your storage account and retrieve the JSON log entries f
#### Analyzing Access logs through GoAccess
-We have published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes and more. For more details, please see the [Readme file in the Resource Manager template folder in GitHub](https://aka.ms/appgwgoaccessreadme).
+We have published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes and more. For more information, see the [Readme file in the Resource Manager template folder in GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/application-gateway-logviewer-goaccess).
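If you want to inspect the raw access logs locally before (or instead of) deploying the template, a rough way to pull them down is shown below. The storage account name is a placeholder, and the container name is an assumption; verify the exact container created by your diagnostic settings.

```bash
# Sketch only: download Application Gateway access-log blobs for local analysis.
az storage blob download-batch \
  --account-name mydiagstorageaccount \
  --source insights-logs-applicationgatewayaccesslog \
  --destination ./access-logs \
  --auth-mode login
```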
## Next steps