Updates from: 07/26/2021 03:03:43
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 07/02/2021 Last updated : 07/23/2021
The syntax for Expressions for Attribute Mappings is reminiscent of Visual Basic for Applications (VBA).
## List of Functions
-[Append](#append)      [BitAnd](#bitand)      [CBool](#cbool)      [Coalesce](#coalesce)      [ConvertToBase64](#converttobase64)      [ConvertToUTF8Hex](#converttoutf8hex)      [Count](#count)      [CStr](#cstr)      [DateFromNum](#datefromnum)  [FormatDateTime](#formatdatetime)      [Guid](#guid)      [IIF](#iif)     [InStr](#instr)      [IsNull](#isnull)      [IsNullOrEmpty](#isnullorempty)      [IsPresent](#ispresent)      [IsString](#isstring)      [Item](#item)      [Join](#join)      [Left](#left)      [Mid](#mid)         [NormalizeDiacritics](#normalizediacritics) [Not](#not)      [NumFromDate](#numfromdate)      [RemoveDuplicates](#removeduplicates)      [Replace](#replace)      [SelectUniqueValue](#selectuniquevalue)     [SingleAppRoleAssignment](#singleapproleassignment)     [Split](#split)    [StripSpaces](#stripspaces)      [Switch](#switch)     [ToLower](#tolower)     [ToUpper](#toupper)     [Word](#word)
+[Append](#append)      [BitAnd](#bitand)      [CBool](#cbool)      [CDate](#cdate)      [Coalesce](#coalesce)      [ConvertToBase64](#converttobase64)      [ConvertToUTF8Hex](#converttoutf8hex)      [Count](#count)      [CStr](#cstr)      [DateAdd](#dateadd)      [DateFromNum](#datefromnum)  [FormatDateTime](#formatdatetime)      [Guid](#guid)      [IgnoreFlowIfNullOrEmpty](#ignoreflowifnullorempty)     [IIF](#iif)     [InStr](#instr)      [IsNull](#isnull)      [IsNullOrEmpty](#isnullorempty)      [IsPresent](#ispresent)      [IsString](#isstring)      [Item](#item)      [Join](#join)      [Left](#left)      [Mid](#mid)      [NormalizeDiacritics](#normalizediacritics)       [Not](#not)      [Now](#now)      [NumFromDate](#numfromdate)      [RemoveDuplicates](#removeduplicates)      [Replace](#replace)      [SelectUniqueValue](#selectuniquevalue)     [SingleAppRoleAssignment](#singleapproleassignment)     [Split](#split)    [StripSpaces](#stripspaces)      [Switch](#switch)     [ToLower](#tolower)     [ToUpper](#toupper)     [Word](#word)
### Append
Takes a source string value and appends the suffix to the end of it.
| **suffix** |Required |String |The string that you want to append to the end of the source value. |
-### Append constant suffix to user name
+#### Append constant suffix to user name
Example: If you are using a Salesforce Sandbox, you might need to append an additional suffix to all your user names before synchronizing them. **Expression:**
In other words, it returns 0 in all cases except when the corresponding bits of the mask and flag parameters are both 1.
`CBool([attribute1] = [attribute2])` Returns True if both attributes have the same value.
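The bitwise semantics described above can be sketched in Python. This is an illustrative model only; the `bit_and` name is hypothetical, and the actual evaluation happens inside the provisioning service:

```python
def bit_and(mask: int, flag: int) -> int:
    # A result bit is 1 only where the corresponding bits of both
    # mask and flag are 1; every other bit is 0.
    return mask & flag

bit_and(0b1101, 0b1011)  # 0b1001 (decimal 9)
```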
+### CDate
+**Function:**
+`CDate(expression)`
+
+**Description:**
+The CDate function returns a UTC DateTime from a string. DateTime is not a native attribute type but it can be used within date functions such as [FormatDateTime](#formatdatetime) and [DateAdd](#dateadd).
+
+**Parameters:**
+
+| Name | Required/ Repeating | Type | Notes |
+| --- | --- | --- | --- |
+| **expression** |Required | expression | Any valid string that represents a date/time. For supported formats, refer to [.NET custom date and time format strings](/dotnet/standard/base-types/custom-date-and-time-format-strings). |
+
+**Remarks:**
+The returned string is always in UTC and follows the format **M/d/yyyy h:mm:ss tt**.
+
+**Example 1:** <br>
+`CDate([StatusHireDate])`
+**Sample input/output:**
+
+* **INPUT** (StatusHireDate): "2020-03-16-07:00"
+* **OUTPUT**: "3/16/2020 7:00:00 AM" <-- *Note that UTC equivalent of the above DateTime is returned*
+
+**Example 2:** <br>
+`CDate("2021-06-30+08:00")`
+**Sample input/output:**
+
+* **INPUT**: "2021-06-30+08:00"
+* **OUTPUT**: "6/29/2021 4:00:00 PM" <-- *Note that UTC equivalent of the above DateTime is returned*
+
+**Example 3:** <br>
+`CDate("2009-06-15T01:45:30-07:00")`
+**Sample input/output:**
+
+* **INPUT**: "2009-06-15T01:45:30-07:00"
+* **OUTPUT**: "6/15/2009 8:45:30 AM" <-- *Note that UTC equivalent of the above DateTime is returned*
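The UTC conversion and **M/d/yyyy h:mm:ss tt** formatting shown in the examples above can be modeled in Python. The `to_utc_string` helper is an illustrative stand-in for CDate, not the provisioning engine itself:

```python
from datetime import datetime, timezone

def to_utc_string(value: str) -> str:
    # Parse an ISO 8601 date/time with a UTC offset and render it in UTC
    # using the M/d/yyyy h:mm:ss tt pattern the docs describe.
    dt = datetime.fromisoformat(value).astimezone(timezone.utc)
    hour12 = dt.hour % 12 or 12
    tt = "AM" if dt.hour < 12 else "PM"
    return f"{dt.month}/{dt.day}/{dt.year} {hour12}:{dt.minute:02d}:{dt.second:02d} {tt}"

print(to_utc_string("2009-06-15T01:45:30-07:00"))  # 6/15/2009 8:45:30 AM
```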
### Coalesce
**Function:**
Returns the first source value that is not NULL. If all arguments are NULL and defaultValue is present, the defaultValue is returned.
| **source1 … sourceN** | Required | String |Required, variable-number of times. Usually name of the attribute from the source object. |
| **defaultValue** | Optional | String | Default value to be used when all source values are NULL. Can be empty string (""). |
-### Flow mail value if not NULL, otherwise flow userPrincipalName
+#### Flow mail value if not NULL, otherwise flow userPrincipalName
Example: You wish to flow the mail attribute if it is present. If it is not, you wish to flow the value of userPrincipalName instead. **Expression:**
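The first-non-NULL selection can be sketched in Python. This is an illustrative model under the assumption that NULL maps to `None`; the attribute values shown are hypothetical:

```python
def coalesce(*sources, default=""):
    # Return the first source value that is not None; otherwise the default.
    for value in sources:
        if value is not None:
            return value
    return default

mail = None
user_principal_name = "jdoe@contoso.com"
coalesce(mail, user_principal_name)  # "jdoe@contoso.com"
```

Note that, as in the documented behavior, an empty string is not NULL: only `None` values are skipped.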
The CStr function converts a value to a string data type.
Returns "cn=Joe,dc=contoso,dc=com"
+### DateAdd
+**Function:**
+`DateAdd(interval, value, dateTime)`
+
+**Description:**
+Returns a date/time string representing a date to which a specified time interval has been added. The returned date is in the format: **M/d/yyyy h:mm:ss tt**.
+
+**Parameters:**
+
+| Name | Required/ Repeating | Type | Notes |
+| --- | --- | --- | --- |
+| **interval** |Required | String | Interval of time you want to add. See accepted values below this table. |
+| **value** |Required | Number | The number of units you want to add. It can be positive (to get dates in the future) or negative (to get dates in the past). |
+| **dateTime** |Required | DateTime | DateTime representing date to which the interval is added. |
+
+The **interval** string must have one of the following values:
+ * yyyy Year
+ * q Quarter
+ * m Month
+ * y Day of year
+ * d Day
+ * w Weekday
+ * ww Week
+ * h Hour
+ * n Minute
+ * s Second
+
+**Example 1: Add 7 days to hire date**
+`DateAdd("d", 7, CDate([StatusHireDate]))`
+* **INPUT** (StatusHireDate): 2012-03-16-07:00
+* **OUTPUT**: 3/23/2012 7:00:00 AM
+
+**Example 2: Get a date 10 days prior to hire date**
+`DateAdd("d", -10, CDate([StatusHireDate]))`
+* **INPUT** (StatusHireDate): 2012-03-16-07:00
+* **OUTPUT**: 3/6/2012 7:00:00 AM
+
+**Example 3: Add 2 weeks to hire date**
+`DateAdd("ww", 2, CDate([StatusHireDate]))`
+* **INPUT** (StatusHireDate): 2012-03-16-07:00
+* **OUTPUT**: 3/30/2012 7:00:00 AM
+
+**Example 4: Add 10 months to hire date**
+`DateAdd("m", 10, CDate([StatusHireDate]))`
+* **INPUT** (StatusHireDate): 2012-03-16-07:00
+* **OUTPUT**: 1/16/2013 7:00:00 AM
+
+**Example 5: Add 2 years to hire date**
+`DateAdd("yyyy", 2, CDate([StatusHireDate]))`
+* **INPUT** (StatusHireDate): 2012-03-16-07:00
+* **OUTPUT**: 3/16/2014 7:00:00 AM
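A minimal Python sketch of a subset of the intervals above ("d", "ww", "m", "yyyy") may help make the arithmetic concrete. It is a simplified model: month/year addition here assumes the day of month exists in the target month (no Jan 31 + 1 month handling), which the real function would need to handle:

```python
from datetime import datetime, timedelta

def date_add(interval: str, value: int, dt: datetime) -> datetime:
    # "d" day and "ww" week are plain timedelta arithmetic.
    if interval == "d":
        return dt + timedelta(days=value)
    if interval == "ww":
        return dt + timedelta(weeks=value)
    # "m" month and "yyyy" year shift the calendar month, carrying into years.
    if interval in ("m", "yyyy"):
        months = value * (12 if interval == "yyyy" else 1)
        total = dt.month - 1 + months
        return dt.replace(year=dt.year + total // 12, month=total % 12 + 1)
    raise ValueError(f"unsupported interval: {interval}")

hire = datetime(2012, 3, 16, 7, 0, 0)
date_add("m", 10, hire)  # datetime(2013, 1, 16, 7, 0) — matches Example 4
```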
### DateFromNum
**Function:**
DateFromNum(value)
Takes a date string from one format and converts it into a different format.
| --- | --- | --- | --- |
| **source** |Required |String |Usually name of the attribute from the source object. |
| **dateTimeStyles** | Optional | String | Use this to specify the formatting options that customize string parsing for some date and time parsing methods. For supported values, see [DateTimeStyles doc](/dotnet/api/system.globalization.datetimestyles). If left empty, the default value used is DateTimeStyles.RoundtripKind, DateTimeStyles.AllowLeadingWhite, DateTimeStyles.AllowTrailingWhite |
-| **inputFormat** |Required |String |Expected format of the source value. For supported formats, see [/dotnet/standard/base-types/custom-date-and-time-format-strings](/dotnet/standard/base-types/custom-date-and-time-format-strings). |
+| **inputFormat** |Required |String |Expected format of the source value. For supported formats, see [.NET custom date and time format strings](/dotnet/standard/base-types/custom-date-and-time-format-strings). |
| **outputFormat** |Required |String |Format of the output date. |
-### Output date as a string in a certain format
+#### Output date as a string in a certain format
Example: You want to send dates to a SaaS application like ServiceNow in a certain format. You can consider using the following expression. **Expression:**
Guid()
**Description:**
The function Guid generates a new random GUID.
+**Example:** <br>
+`Guid()`<br>
+Sample output: "1088051a-cd4b-4288-84f8-e02042ca72bc"
+### IgnoreFlowIfNullOrEmpty
+**Function:**
+IgnoreFlowIfNullOrEmpty(expression)
+
+**Description:**
+The IgnoreFlowIfNullOrEmpty function instructs the provisioning service to ignore the attribute and drop it from the flow if the enclosed function or attribute is NULL or empty.
+
+**Parameters:**
+
+| Name | Required/ Repeating | Type | Notes |
+| --- | --- | --- | --- |
+| **expression** | Required | expression | Expression to be evaluated |
+
+**Example 1: Don't flow an attribute if it is null** <br>
+`IgnoreFlowIfNullOrEmpty([department])` <br>
+The above expression will drop the department attribute from the provisioning flow if it is null or empty. <br>
+
+**Example 2: Don't flow an attribute if the expression mapping evaluates to empty string or null** <br>
+Let's say the SuccessFactors attribute *prefix* is mapped to the on-premises Active Directory attribute *personalTitle* using the following expression mapping: <br>
+`IgnoreFlowIfNullOrEmpty(Switch([prefix], "", "3443", "Dr.", "3444", "Prof.", "3445", "Prof. Dr."))` <br>
+The above expression first evaluates the [Switch](#switch) function. If the *prefix* attribute does not have any of the values listed within the *Switch* function, then *Switch* will return an empty string and the attribute *personalTitle* will not be included in the provisioning flow to on-premises Active Directory.
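The drop-if-empty behavior can be modeled in Python. This is a sketch, not the provisioning engine: `None` stands in for "drop this attribute from the flow", and the attribute values are the ones from the examples above:

```python
def ignore_flow_if_null_or_empty(value):
    # None here is a marker meaning "drop this attribute from the flow".
    return None if value in (None, "") else value

def build_payload(mappings):
    # Only attributes that survived the check are provisioned.
    return {name: value for name, value in mappings.items() if value is not None}

build_payload({
    "department": ignore_flow_if_null_or_empty(""),        # dropped
    "personalTitle": ignore_flow_if_null_or_empty("Dr."),  # kept
})  # {'personalTitle': 'Dr.'}
```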
### IIF
**Function:**
Requires one string argument. Returns the string, but with any diacritical characters replaced with equivalent non-diacritical characters.
| **source** |Required |String | Usually a first name or last name attribute. |
-### Remove diacritics from a string
+#### Remove diacritics from a string
Example: You need to replace characters containing accent marks with equivalent characters that don't contain accent marks. **Expression:**
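Diacritic removal of this kind can be approximated in Python with Unicode decomposition. This sketch illustrates the general technique; it is not guaranteed to match NormalizeDiacritics character for character:

```python
import unicodedata

def normalize_diacritics(s: str) -> str:
    # Decompose accented characters (NFD), then drop the combining marks,
    # leaving the base characters behind.
    decomposed = unicodedata.normalize("NFD", s)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

normalize_diacritics("José")  # "Jose"
```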
Flips the boolean value of the **source**. If **source** value is True, returns False. Otherwise, returns True.
| --- | --- | --- | --- |
| **source** |Required |Boolean String |Expected **source** values are "True" or "False". |
+### Now
+**Function:**
+Now()
+
+**Description:**
+The Now function returns a string representing the current UTC DateTime in the format **M/d/yyyy h:mm:ss tt**.
+
+**Example:**
+`Now()` <br>
+Example value returned *7/2/2021 3:33:38 PM*
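The same **M/d/yyyy h:mm:ss tt** rendering of the current UTC time can be sketched in Python; `now_string` is an illustrative stand-in, not the function itself:

```python
import re
from datetime import datetime, timezone

def now_string() -> str:
    # Render the current UTC time in the M/d/yyyy h:mm:ss tt pattern.
    dt = datetime.now(timezone.utc)
    hour12 = dt.hour % 12 or 12
    tt = "AM" if dt.hour < 12 else "PM"
    return f"{dt.month}/{dt.day}/{dt.year} {hour12}:{dt.minute:02d}:{dt.second:02d} {tt}"
```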
### NumFromDate
**Function:**
Replaces values within a string in a case-sensitive manner. The function behaves differently depending on the parameters provided.
| **replacementAttributeName** |Optional |String |Name of the attribute to be used for replacement value |
| **template** |Optional |String |When **template** value is provided, we will look for **oldValue** inside the template and replace it with **source** value. |
-### Replace characters using a regular expression
+#### Replace characters using a regular expression
Example: You need to find characters that match a regular expression value and remove them. **Expression:**
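The regex-based removal described here follows the usual substitute-with-empty-string pattern. A hedged Python sketch (the pattern and phone-number input are illustrative, not from the original docs):

```python
import re

def replace_with_regex(source: str, pattern: str, replacement: str) -> str:
    # Substitute every substring matching the regular expression;
    # an empty replacement effectively removes the matches.
    return re.sub(pattern, replacement, source)

replace_with_regex("(123) 456-7890", r"[()\s-]", "")  # "1234567890"
```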
Requires a minimum of two arguments, which are unique value generation rules defined using expressions.
| --- | --- | --- | --- |
| **uniqueValueRule1 … uniqueValueRuleN** |At least 2 are required, no upper bound |String | List of unique value generation rules to evaluate. |
-### Generate unique value for userPrincipalName (UPN) attribute
+#### Generate unique value for userPrincipalName (UPN) attribute
Example: Based on the user's first name, middle name and last name, you need to generate a value for the UPN attribute and check for its uniqueness in the target AD directory before assigning the value to the UPN attribute. **Expression:**
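The rule-by-rule uniqueness check can be modeled in Python. This is a simplified illustration only: the real function checks uniqueness against the target directory during provisioning, here modeled by an `is_taken` callback, and the candidate UPN values are hypothetical:

```python
def select_unique_value(candidate_rules, is_taken):
    # Evaluate candidate-generation rules in order and return the first
    # value not already present in the target (modeled by is_taken).
    for make_candidate in candidate_rules:
        value = make_candidate()
        if not is_taken(value):
            return value
    raise ValueError("all candidate rules produced values that are already taken")

taken = {"john.smith@contoso.com"}          # already exists in the directory
rules = [
    lambda: "john.smith@contoso.com",       # first choice: first.last
    lambda: "j.smith@contoso.com",          # fallback: initial.last
]
select_unique_value(rules, taken.__contains__)  # "j.smith@contoso.com"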
Splits a string into a multi-valued array, using the specified delimiter character.
| **source** |Required |String |**source** value to update. |
| **delimiter** |Required |String |Specifies the character that will be used to split the string (example: ",") |
-### Split a string into a multi-valued array
+#### Split a string into a multi-valued array
Example: You need to take a comma-delimited list of strings, and split them into an array that can be plugged into a multi-value attribute like Salesforce's PermissionSets attribute. In this example, a list of permission sets has been populated in extensionAttribute5 in Azure AD. **Expression:**
When **source** value matches a **key**, returns **value** for that **key**. If **source** value doesn't match any keys, returns **defaultValue**.
| **key** |Required |String |**Key** to compare **source** value with. |
| **value** |Required |String |Replacement value for the **source** matching the key. |
-### Replace a value based on predefined set of options
+#### Replace a value based on predefined set of options
Example: You need to define the time zone of the user based on the state code stored in Azure AD. If the state code doesn't match any of the predefined options, use default value of "Australia/Sydney".
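The key/value matching with a fallback default maps naturally onto a dictionary lookup. A Python sketch (the state-code-to-time-zone pairs here are hypothetical examples, not the full documented mapping):

```python
def switch(source, default, *pairs):
    # pairs is a flat key1, value1, key2, value2, ... sequence,
    # mirroring Switch(source, defaultValue, key, value, ...).
    lookup = dict(zip(pairs[::2], pairs[1::2]))
    return lookup.get(source, default)

switch("VIC", "Australia/Sydney",
       "VIC", "Australia/Melbourne",
       "QLD", "Australia/Brisbane")  # "Australia/Melbourne"
```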
If you would like to set existing values in the target system to lower case, [up
| **source** |Required |String |Usually name of the attribute from the source object |
| **culture** |Optional |String |The format for the culture name based on RFC 4646 is *languagecode2-country/regioncode2*, where *languagecode2* is the two-letter language code and *country/regioncode2* is the two-letter subculture code. Examples include ja-JP for Japanese (Japan) and en-US for English (United States). In cases where a two-letter language code is not available, a three-letter code derived from ISO 639-2 is used.|
-### Convert generated userPrincipalName (UPN) value to lower case
+#### Convert generated userPrincipalName (UPN) value to lower case
Example: You would like to generate the UPN value by concatenating the PreferredFirstName and PreferredLastName source fields and converting all characters to lower case. `ToLower(Join("@", NormalizeDiacritics(StripSpaces(Join(".", [PreferredFirstName], [PreferredLastName]))), "contoso.com"))`
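The composed expression above can be walked through step by step in Python. This is a hedged approximation of each function's behavior, with hypothetical input names:

```python
import unicodedata

def generate_upn(first_name: str, last_name: str) -> str:
    # Mirrors ToLower(Join("@", NormalizeDiacritics(StripSpaces(
    #     Join(".", first, last))), "contoso.com")) step by step.
    local = ".".join([first_name, last_name])          # Join(".", ...)
    local = local.replace(" ", "")                     # StripSpaces
    local = "".join(c for c in unicodedata.normalize("NFD", local)
                    if not unicodedata.combining(c))   # NormalizeDiacritics
    return "@".join([local, "contoso.com"]).lower()    # Join("@", ...) + ToLower

generate_upn("José", "De La Cruz")  # "jose.delacruz@contoso.com"
```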
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-collections.md
To create a collection, you must have an Azure AD Premium P1 or P2 license.
The Audit logs record My Apps collections operations, including collection creation and end-user actions. The following events are generated from My Apps:
-* Create collection
-* Edit collection
-* Delete collection
-* Launch an application (end user)
+* Create admin collection
+* Edit admin collection
+* Delete admin collection
* Self-service application adding (end user)
* Self-service application deletion (end user)
active-directory Services Support Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/services-support-managed-identities.md
Managed identity type | All Generally Available<br>Global Azure Regions | Azure
Refer to the following list to configure managed identity for Azure Data Factory V2 (in regions where available):

- [Azure portal](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity)
- [PowerShell](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity-using-powershell)
- [REST](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity-using-rest-api)
- [SDK](~/articles/data-factory/data-factory-service-identity.md#generate-managed-identity-using-sdk)

### Azure Digital Twins
automation Automation Create Standalone Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-create-standalone-account.md
Title: Create a standalone Azure Automation account
description: This article tells how to create a standalone Azure Automation account and a Classic Run As account. Previously updated : 01/07/2021 Last updated : 07/24/2021

# Create a standalone Azure Automation account
-This article shows you how to create an Azure Automation account in the Azure portal. You can use the portal Automation account to evaluate and learn about Automation without using additional management features or integrating with Azure Monitor logs. You can add management features or integrate with Azure Monitor logs for advanced monitoring of runbook jobs at any point in the future.
+This article shows you how to create an Azure Automation account using the Azure portal. You can use the Automation account to evaluate and learn about Automation without using additional management features or integrating with Azure Monitor Logs. You can add management features or integrate with Azure Monitor Logs for advanced monitoring of runbook jobs at any point in the future.
With an Automation account, you can authenticate runbooks by managing resources in either Azure Resource Manager or the classic deployment model. One Automation Account can manage resources across all regions and subscriptions for a given tenant.
With this account created for you, you can quickly start building and deploying
To create or update an Automation account, and to complete the tasks described in this article, you must have the following privileges and permissions: * To create an Automation account, your Azure AD user account must be added to a role with permissions equivalent to the Owner role for `Microsoft.Automation` resources. For more information, see [Role-Based Access Control in Azure Automation](automation-role-based-access-control.md).
-* In the Azure portal, under **Azure Active Directory** > **MANAGE** > **User settings**, if **App registrations** is set to **Yes**, non-administrator users in your Azure AD tenant can [register Active Directory applications](../active-directory/develop/howto-create-service-principal-portal.md#check-azure-subscription-permissions). If **App registrations** is set to **No**, the user who performs this action must have at least an Application Developer role in Azure AD.
+* In the Azure portal, under **Azure Active Directory** > **MANAGE** > **User settings**, if **App registrations** is set to **Yes**, non-administrator users in your Azure AD tenant can [register Active Directory applications](../active-directory/develop/howto-create-service-principal-portal.md#check-azure-subscription-permissions). If **App registrations** is set to **No**, the user who performs this action must be, at a minimum, a member of the Application Developer role in Azure AD.
-If you aren't a member of the subscription's Active Directory instance before you're added to the subscription's global Administrator/Coadministrator role, you're added to Active Directory as a guest. In this scenario, you see this message on the Add Automation Account pane: `You do not have permissions to create.`
+If you aren't a member of the subscription's Active Directory instance before you're added to the subscription's global Administrator/Co-Administrator role, you're added to Active Directory as a guest. In this scenario, you see this message on the **Add Automation Account** page: `You do not have permissions to create.`
-If a user is added to the global Administrator/Coadministrator role first, you can remove the user from the subscription's Active Directory instance. You can readd the user to the User role in Active Directory. To verify user roles:
+If a user is added to the global Administrator/Co-Administrator role first, you can remove the user from the subscription's Active Directory instance. You can re-add the user to the User role in Active Directory. To verify user roles:
-1. In the Azure portal, go to the Azure Active Directory pane.
+1. In the Azure portal, go to the Azure Active Directory page.
1. Select **Users and groups**.
1. Select **All users**.
1. After you select a specific user, select **Profile**. The value of the **User type** attribute under the user's profile should not be **Guest**.
If a user is added to the global Administrator/Coadministrator role first, you c
To create an Azure Automation account in the Azure portal, complete the following steps:
-1. Sign in to the Azure portal with an account that's a member of the subscription Administrators role and a coadministrator of the subscription.
+1. Sign in to the Azure portal with an account that's a member of the subscription Administrators role and a Co-Administrator of the subscription.
1. Select **+ Create a Resource**.
1. Search for **Automation**. In the search results, select **Automation**.
To create an Azure Automation account in the Azure portal, complete the followin
![Add Automation account](media/automation-create-standalone-account/automation-create-automationacct-properties.png)

> [!NOTE]
- > If you see the following message in the Add Automation Account pane, your account is not a member of the subscription Administrators role and a coadministrator of the subscription.
+ > If you see the following message on the **Add Automation Account** page, your account is not a member of the subscription Administrators role and a Co-Administrator of the subscription.
> > :::image type="content" source="media/automation-create-standalone-account/create-account-without-perms.png" alt-text="Screenshot of prompt 'You do not have permissions to create a Run As account in Azure Active directory.'":::
-1. In the Add Automation Account pane, enter a name for your new Automation account in the **Name** field. You can't change this name after it's chosen.
+1. On the **Add Automation Account** page, enter a name for your new Automation account in the **Name** field. You can't change this name after it's chosen.
> [!NOTE]
> Automation account names are unique per region and resource group. Names for deleted Automation accounts might not be immediately available.
To create an Azure Automation account in the Azure portal, complete the followin
1. For the **Create Azure Run As account** option, ensure that **Yes** is selected, and then click **Create**.

> [!NOTE]
- > If you choose not to create the Run As account by selecting **No** for **Create Azure Run As account**, a message appears in the Add Automation Account pane. Although the account is created in the Azure portal, the account doesn't have a corresponding authentication identity in your classic deployment model subscription or in the Azure Resource Manager subscription directory service. Therefore, the Automation account doesn't have access to resources in your subscription. This prevents any runbooks that reference this account from being able to authenticate and perform tasks against resources in those deployment models.
+ > If you choose not to create the Run As account by selecting **No** for **Create Azure Run As account**, a message appears on the **Add Automation Account** page. Although the account is created in the Azure portal, the account doesn't have a corresponding authentication identity in your classic deployment model subscription or in the Azure Resource Manager subscription directory service. Therefore, the Automation account doesn't have access to resources in your subscription. This prevents any runbooks that reference this account from being able to authenticate and perform tasks against resources in those deployment models.
> > :::image type="content" source="media/automation-create-standalone-account/create-account-decline-create-runas-msg.png" alt-text="Screenshot of prompt with message 'You have chosen not to create a Run As Account.'"::: >
automation Automation Secure Asset Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-secure-asset-encryption.md
To use customer-managed keys with an Automation account, your Automation account
### Using PowerShell
-Use PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to modify an existing Azure Automation account. The `-AssignSystemIdentity` parameter generates and assigns a new system-assigned identity for the Automation account to use with other services like Azure Key Vault. For more information, see [What are managed identities for Azure resources?](/active-directory/managed-identities-azure-resources/overview) and [About Azure Key Vault](/key-vault/general/overview). Execute the following code:
+Use PowerShell cmdlet [Set-AzAutomationAccount](/powershell/module/az.automation/set-azautomationaccount) to modify an existing Azure Automation account. The `-AssignSystemIdentity` parameter generates and assigns a new system-assigned identity for the Automation account to use with other services like Azure Key Vault. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md) and [About Azure Key Vault](../key-vault/general/overview.md). Execute the following code:
```powershell # Revise variables with your actual values.
automation Disable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/disable-managed-identity-for-automation.md
Title: Disable system-assigned managed identity for Azure Automation account (pr
description: This article explains how to disable a system-assigned managed identity for an Azure Automation account. Previously updated : 07/13/2021 Last updated : 07/24/2021
Syntax and example steps are provided below.
### Request body
-The following request body disables the system-assigned managed identity and removes any user-assigned managed identities.
-
-PATCH
+The following request body disables the system-assigned managed identity and removes any user-assigned managed identities using the HTTP **PATCH** method.
```json {
PATCH
```
-If there are multiple user-assigned identities defined, to retain them and only remove the system-assigned identity you need to specify each user-assigned identity using comma-delimited list as in the following example:
-
-PATCH
+If there are multiple user-assigned identities defined, to retain them and only remove the system-assigned identity, you need to specify each user-assigned identity using a comma-delimited list. The example below uses the HTTP **PATCH** method.
```json {
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/enable-managed-identity-for-automation.md
Title: Using a system-assigned managed identity for an Azure Automation account
description: This article describes how to set up managed identity for Azure Automation accounts. Previously updated : 07/09/2021 Last updated : 07/24/2021
Syntax and example steps are provided below.
#### Syntax
-The body syntax below enables a system-assigned managed identity to an existing Automation account. However, this syntax will remove any existing user-assigned managed identities associated with the Automation account.
-
-PATCH
+The body syntax below enables a system-assigned managed identity for an existing Automation account using the HTTP **PATCH** method. However, this syntax will remove any existing user-assigned managed identities associated with the Automation account.
```json {
PATCH
} ```
-If there are multiple user-assigned identities defined, to retain them and only remove the system-assigned identity you need to specify each user-assigned identity using comma-delimited list as in the following example:
-
-PATCH
+If there are multiple user-assigned identities defined, to retain them and only remove the system-assigned identity, you need to specify each user-assigned identity using a comma-delimited list. The example below uses the HTTP **PATCH** method.
```json {
Perform the following steps.
1. Copy and paste the body syntax into a file named `body_sa.json`. Save the file on your local machine or in an Azure storage account.
-1. Revise the variable value below and then execute.
+1. Update the variable value below and then execute.
```powershell $file = "path\body_sa.json"
Perform the following steps.
1. Revise the syntax of the template above to use your Automation account and save it to a file named `template_sa.json`.
-1. Revise the variable value below and then execute.
+1. Update the variable value below and then execute.
```powershell $templateFile = "path\template_sa.json"
$AzureContext = Set-AzContext -SubscriptionId "SubscriptionID"
## Generate an access token without using Azure cmdlets

For HTTP endpoints, make sure of the following:

- The metadata header must be present and should be set to "true".
- A resource must be passed along with the request, as a query parameter for a GET request and as form data for a POST request.
- The X-IDENTITY-HEADER should be set to the value of the environment variable IDENTITY_HEADER for Hybrid Runbook Workers.
- Content Type for the POST request must be 'application/x-www-form-urlencoded'.
-### Get Access token for System Assigned Identity using Http Get
+### Get Access token for System Assigned Identity using HTTP Get
```powershell $resource= "?resource=https://management.azure.com/"
$accessToken = Invoke-RestMethod -Uri $url -Method 'GET' -Headers $Headers
Write-Output $accessToken.access_token ```
-### Get Access token for System Assigned Identity using Http Post
+### Get Access token for System Assigned Identity using HTTP Post
```powershell $url = $env:IDENTITY_ENDPOINT
automation Remove User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/remove-user-assigned-identity.md
Title: Remove user-assigned managed identity for Azure Automation account (previ
description: This article explains how to remove a user-assigned managed identity for an Azure Automation account. Previously updated : 07/13/2021 Last updated : 07/24/2021
You can remove a user-assigned managed identity from the Automation account by u
### Request body
-Scenario: System-assigned managed identity is enabled or is to be enabled. One of many user-assigned managed identities is to be removed. This example removes a user-assigned managed identity named `firstIdentity`.
-
-PATCH
+Scenario: System-assigned managed identity is enabled or is to be enabled. One of many user-assigned managed identities is to be removed. This example removes a user-assigned managed identity named `firstIdentity` using the HTTP **PATCH** method.
```json {
PATCH
} ```
-Scenario: System-assigned managed identity is enabled or is to be enabled. All user-assigned managed identities are to be removed.
-
-PUT
+Scenario: System-assigned managed identity is enabled or is to be enabled. All user-assigned managed identities are to be removed using the HTTP **PUT** method.
```json {
PUT
} ```
-Scenario: System-assigned managed identity is disabled or is to be disabled. One of many user-assigned managed identities is to be removed. This example removes a user-assigned managed identity named `firstIdentity`.
-
-PATCH
+Scenario: System-assigned managed identity is disabled or is to be disabled. One of many user-assigned managed identities is to be removed. This example removes a user-assigned managed identity named `firstIdentity` using the HTTP **PATCH** method.
```json {
PATCH
```
-Scenario: System-assigned managed identity is disabled or is to be disabled. All user-assigned managed identities are to be removed.
-
-PUT
+Scenario: System-assigned managed identity is disabled or is to be disabled. All user-assigned managed identities are to be removed using the HTTP **PUT** method.
```json {
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh (Preview)
description: Open Service Mesh (OSM) extension on Arc enabled Kubernetes cluster Previously updated : 05/24/2021 Last updated : 07/23/2021
You should see a JSON output similar to the output below:
"version": "0.8.4"
}
```

## OSM controller configuration
+OSM deploys a MeshConfig resource `osm-mesh-config` as a part of its control plane in the `arc-osm-system` namespace. The purpose of this MeshConfig is to give the mesh owner/operator the ability to update some of the mesh configurations based on their needs. To view the default values, use the following command.
+
+```azurecli-interactive
+kubectl describe meshconfig osm-mesh-config -n arc-osm-system
+```
+The output would show the default values:
+
+```azurecli-interactive
+Certificate:
+ Service Cert Validity Duration: 24h
+ Feature Flags:
+ Enable Egress Policy: true
+ Enable Multicluster Mode: false
+ Enable WASM Stats: true
+ Observability:
+ Enable Debug Server: false
+ Osm Log Level: info
+ Tracing:
+ Address: jaeger.osm-system.svc.cluster.local
+ Enable: false
+ Endpoint: /api/v2/spans
+ Port: 9411
+ Sidecar:
+ Config Resync Interval: 0s
+ Enable Privileged Init Container: false
+ Envoy Image: mcr.microsoft.com/oss/envoyproxy/envoy:v1.18.3
+ Init Container Image: mcr.microsoft.com/oss/openservicemesh/init:v0.9.1
+ Log Level: error
+ Max Data Plane Connections: 0
+ Resources:
+ Traffic:
+ Enable Egress: false
+ Enable Permissive Traffic Policy Mode: true
+ Inbound External Authorization:
+ Enable: false
+ Failure Mode Allow: false
+ Stat Prefix: inboundExtAuthz
+ Timeout: 1s
+ Use HTTPS Ingress: false
+```
+Refer to the [Config API reference](https://docs.openservicemesh.io/docs/apidocs/config/v1alpha1) for more information. Notice that **spec.traffic.enablePermissiveTrafficPolicyMode** is set to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
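+
+When permissive traffic policy mode is disabled, traffic between meshed services must instead be explicitly allowed with SMI access policies. As an illustrative sketch only (the service-account names `bookbuyer` and `bookstore` and the route group `bookstore-routes` are hypothetical, and a matching HTTPRouteGroup is assumed to be defined separately), a minimal SMI TrafficTarget could look like:
+
+```yaml
+# Sketch: allow the bookbuyer service account to call the bookstore service account
+apiVersion: access.smi-spec.io/v1alpha3
+kind: TrafficTarget
+metadata:
+  name: bookbuyer-to-bookstore
+  namespace: bookstore
+spec:
+  destination:
+    kind: ServiceAccount
+    name: bookstore
+    namespace: bookstore
+  rules:
+  - kind: HTTPRouteGroup
+    name: bookstore-routes   # assumed to exist; defines the allowed HTTP routes
+    matches:
+    - buy-a-book
+  sources:
+  - kind: ServiceAccount
+    name: bookbuyer
+    namespace: bookbuyer
+```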
+
+### Making changes to OSM controller configuration
+
+> [!NOTE]
+> Values in the MeshConfig `osm-mesh-config` are persisted across upgrades.
+
+Changes to `osm-mesh-config` can be made using the `kubectl patch` command. In the following example, the permissive traffic policy mode is changed to `false`.
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":false}}}' --type=merge
+```
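+
+To confirm that the patch took effect, you can read the field back with a jsonpath query (a sketch; it assumes the same field path used in the patch above):
+
+```azurecli-interactive
+kubectl get meshconfig osm-mesh-config -n arc-osm-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
+```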
+
+If an incorrect value is used, validations on the MeshConfig CRD will prevent the change with an error message explaining why the value is invalid. For example, the following command shows what happens if we patch `enableEgress` to a non-boolean value.
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n arc-osm-system -p '{"spec":{"traffic":{"enableEgress":"no"}}}' --type=merge
+
+# Validations on the CRD will deny this change
+The MeshConfig "osm-mesh-config" is invalid: spec.traffic.enableEgress: Invalid value: "string": spec.traffic.enableEgress in body must be of type boolean: "string"
+```
+
+## OSM controller configuration (version v0.8.4)
Currently you can access and configure the OSM controller configuration via the ConfigMap. To view the OSM controller configuration settings, query the `osm-config` ConfigMap via `kubectl` to view its configuration settings.
Output:
} ```
-Read [OSM ConfigMap documentation](https://release-v0-8.docs.openservicemesh.io/docs/osm_config_map/) to understand each of the available configurations. Notice the **permissive_traffic_policy_mode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh and programs traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services.
+Read [OSM ConfigMap documentation](https://release-v0-8.docs.openservicemesh.io/docs/osm_config_map/) to understand each of the available configurations.
-### Making changes to OSM ConfigMap
-
-To make changes to the OSM ConfigMap, use the following guidance:
+To make changes to the OSM ConfigMap for version v0.8.4, use the following guidance:
1. Copy and save the changes you wish to make in a JSON file. In this example, we are going to change the permissive_traffic_policy_mode from true to false. Each time you make a change to `osm-config`, you will have to provide the full list of changes (compared to the default `osm-config`) in a JSON file. ```json
More information about onboarding services can be found [here](https://docs.open
### Configure OSM with Service Mesh Interface (SMI) policies
-You can start with a [demo application](https://release-v0-8.docs.openservicemesh.io/docs/install/manual_demo/) or use your test environment to try out SMI policies.
+You can start with a [demo application](https://docs.openservicemesh.io/docs/getting_started/manual_demo/#deploy-applications) or use your test environment to try out SMI policies.
> [!NOTE] > Ensure that the version of the bookstore application you run matches the version of the OSM extension installed on your cluster. Ex: if you are using v0.8.4 of the OSM extension, use the bookstore demo from release-v0.8 branch of OSM upstream repository. ### Configuring your own Jaeger, Prometheus and Grafana instances
-The OSM extension has [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/) and [Grafana](https://grafana.com/docs/grafana/latest/installation/) installation disabled by default so that users can integrate OSM with their own running instances of those tools instead. To integrate with your own instances, check the following documentation:
+The OSM extension does not install add-ons like [Jaeger](https://www.jaegertracing.io/docs/getting-started/), [Prometheus](https://prometheus.io/docs/prometheus/latest/installation/) and [Grafana](https://grafana.com/docs/grafana/latest/installation/) so that users can integrate OSM with their own running instances of those tools instead. To integrate with your own instances, check the following documentation:
-- [BYO-Jaeger instance](https://github.com/openservicemesh/osm-docs/blob/main/content/docs/tasks/observability/tracing.md#byo-bring-your-own)
- - To set the values described in this documentation, you will need to update the `osm-config` ConfigMap with the following settings:
- ```json
- {
- "osm.OpenServiceMesh.tracing.enable": "true",
- "osm.OpenServiceMesh.tracing.address": "<tracing server hostname>",
- "osm.OpenServiceMesh.tracing.port": "<tracing server port>",
- "osm.OpenServiceMesh.tracing.endpoint": "<tracing server endpoint>",
- }
- ```
- Use the guidance available in [this section](#making-changes-to-osm-configmap) to push these settings to osm-config.
-- [BYO-Prometheus instance](https://github.com/openservicemesh/osm/blob/release-v0.8/docs/content/docs/tasks_usage/metrics.md#byo-bring-your-own)-- [BYO-Grafana dashboard](https://github.com/openservicemesh/osm/blob/release-v0.8/docs/content/docs/tasks_usage/metrics.md#importing-dashboards-on-a-byo-grafana-instance)
+> [!NOTE]
+> Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace name `arc-osm-system` when making changes to `osm-mesh-config`.
+
+- [BYO-Jaeger instance](https://docs.openservicemesh.io/docs/tasks/observability/tracing/#byo-bring-your-own)
+- [BYO-Prometheus instance](https://docs.openservicemesh.io/docs/tasks/observability/metrics/#byo-prometheus)
+- [BYO-Grafana dashboard](https://docs.openservicemesh.io/docs/tasks/observability/metrics/#importing-dashboards-on-a-byo-grafana-instance)
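+
+As an illustration, a BYO tracing instance can be wired up by patching the tracing fields shown in the default `osm-mesh-config` output above. This is a sketch: the field casing under `spec.observability.tracing` is assumed from the MeshConfig defaults, and the address is a placeholder for your own tracing server.
+
+```azurecli-interactive
+kubectl patch meshconfig osm-mesh-config -n arc-osm-system --type=merge -p '{"spec":{"observability":{"tracing":{"enable":true,"address":"<your-tracing-host>.svc.cluster.local","port":9411,"endpoint":"/api/v2/spans"}}}}'
+```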
## Monitoring application using Azure Monitor and Applications Insights
Both Azure Monitor and Azure Application Insights help you maximize the availab
Arc-enabled Open Service Mesh will have deep integrations into both of these Azure services and provide a seamless Azure experience for viewing and responding to critical KPIs provided by OSM metrics. Follow the steps below to allow Azure Monitor to scrape Prometheus endpoints for collecting application metrics.
-1. Ensure that prometheus_scraping is set to true in the OSM ConfigMap.
+1. Ensure that `prometheus_scraping` is set to `true` in `osm-mesh-config`.
2. Ensure that the application namespaces that you wish to be monitored are onboarded to the mesh. Follow the guidance [available here](#onboard-namespaces-to-the-service-mesh).
Arc enabled Open Service Mesh will have deep integrations into both of these Azu
kubectl apply -f container-azm-ms-osmconfig.yaml ```
-It may take upto 15 minutes for the metrics to show up in Log Analytics. You can try querying the InsightsMetrics table.
+It may take up to 15 minutes for the metrics to show up in Log Analytics. You can try querying the InsightsMetrics table.
```azurecli-interactive InsightsMetrics
Read more about integration with Azure Monitor [here](https://github.com/microso
## Upgrade the OSM extension instance to a specific version
-> [!NOTE]
-> Upgrading the OSM add-on could potentially overwrite user-configured values in the OSM ConfigMap.
-
-To prevent any previous ConfigMap changes from being overwritten, pass in the same configuration settings file used to make those edits.
- There may be some downtime of the control plane during upgrades. The data plane will only be affected during CRD upgrades. ### Supported Upgrades
The OSM extension can be upgraded up to the next minor version. Downgrades and m
The OSM extension cannot be upgraded to a new version if that version contains CRD version updates without deleting the existing CRDs first. You can check if an OSM upgrade also includes CRD version updates by checking the CRD Updates section of the [OSM release notes](https://github.com/openservicemesh/osm/releases).
-Check the [OSM CRD Upgrades documentation](https://github.com/openservicemesh/osm/blob/release-v0.8/docs/content/docs/upgrade_guide.md#crd-upgrades) to prepare your cluster for such an upgrade. Make sure to back up your Custom Resources prior to deleting the CRDs so that they can be easily recreated after upgrading. Afterwards, follow the upgrade instructions using az k8s-extension in this guide instead of using Helm or the OSM CLI.
+Make sure to back up your Custom Resources prior to deleting the CRDs so that they can be easily recreated after upgrading. Afterwards, follow the upgrade instructions captured below.
> [!NOTE] > Upgrading the CRDs will affect the data plane as the SMI policies won't exist between the time they're deleted and the time they're created again. ### Upgrade instructions
-1. [Delete outdated CRDs and install updated CRDs](https://github.com/openservicemesh/osm/blob/release-v0.8/docs/content/docs/upgrade_guide.md#crd-upgrades) if necessary
- - Back up existing Custom Resources as a reference for when you create new ones.
- - Install the updated CRDs and Custom Resources prior to installing the new extension version.
+1. Delete the old CRDs and custom resources (Run from the root of the [OSM repo](https://github.com/openservicemesh/osm)). Ensure that the tag of the [OSM CRDs](https://github.com/openservicemesh/osm/tree/main/charts/osm/crds) corresponds to the new version of the chart.
+ ```azurecli-interactive
+ kubectl delete --ignore-not-found --recursive -f ./charts/osm/crds/
+ ```
+
+2. Install the updated CRDs.
+ ```azurecli-interactive
+ kubectl apply -f charts/osm/crds/
+ ```
-2. Set the new chart version as an environment variable:
+3. Set the new chart version as an environment variable:
```azurecli-interactive export VERSION=<chart version> ```
-3. Run az k8s-extension create with the new chart version
+4. Run az k8s-extension create with the new chart version
```azurecli-interactive az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --release-train pilot --name osm --version $VERSION --configuration-settings-file $SETTINGS_FILE ```
-4. Recreate Custom Resources using new CRDs if necessary
+5. Recreate Custom Resources using new CRDs
## Uninstall Arc-enabled Open Service Mesh
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
Title: Monitor virtual machines with Azure Monitor - Configure monitoring
-description: Describes how to configure virtual machines for monitoring in Azure Monitor. Monitor virtual machines and their workloads with Azure Monitor scenario.
-
+ Title: 'Monitor virtual machines with Azure Monitor: Configure monitoring'
+description: Learn how to configure virtual machines for monitoring in Azure Monitor. Monitor virtual machines and their workloads with an Azure Monitor scenario.
+
Last updated 06/21/2021
-# Monitor virtual machines with Azure Monitor - Configure monitoring
-This article is part of the [Monitoring virtual machines and their workloads in Azure Monitor scenario](monitor-virtual-machine.md). It describes how to configure monitoring of your Azure and hybrid virtual machines in Azure Monitor.
+# Monitor virtual machines with Azure Monitor: Configure monitoring
+This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to configure monitoring of your Azure and hybrid virtual machines in Azure Monitor.
-These are the most common Azure Monitor features to monitor the virtual machine host and its guest operating system. Depending on your particular environment and business requirements, you may not want to implement all features enabled by this configuration. Each section will describe what features are enabled by that configuration and whether it will potentially result in additional cost. This will help you to assess whether to perform each step of the configuration. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for detailed pricing information.
-
-A general description of each feature enabled by this configuration is provided in the [overview for scenario](monitor-virtual-machine.md). That article also includes links to content providing a detailed description of each feature to further help you assess your requirements.
+This article discusses the most common Azure Monitor features to monitor the virtual machine host and its guest operating system. Depending on your particular environment and business requirements, you might not want to implement all features enabled by this configuration. Each section describes what features are enabled by that configuration and whether it potentially results in additional cost. This information will help you assess whether to perform each step of the configuration. For detailed pricing information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+A general description of each feature enabled by this configuration is provided in the [overview for the scenario](monitor-virtual-machine.md). That article also includes links to content that provides a detailed description of each feature to further help you assess your requirements.
> [!NOTE]
-> The features enabled by the configuration support monitoring workloads running on your virtual machine, but you'll typically require additional configuration depending your particular workloads. See [Workload monitoring](monitor-virtual-machine-workloads.md) for details on this additional configuration.
+> The features enabled by the configuration support monitoring workloads running on your virtual machine. But depending on your particular workloads, you'll typically require additional configuration. For details on this additional configuration, see [Workload monitoring](monitor-virtual-machine-workloads.md).
## Configuration overview The following table lists the steps that must be performed for this configuration. Each one links to the section with the detailed description of that configuration step. | Step | Description | |:|:|
-| [No configuration](#no-configuration) | Activity log and platform metrics for the Azure virtual machine hosts are automatically collected with no configuration. |
-| [Create and prepare Log Analytics workspace](#create-and-prepare-log-analytics-workspace) | Create a Log Analytics workspace and configure it for VM insights. Depending on your particular requirements, you may configure multiple workspaces. |
-| [Send Activity log to Log Analytics workspace](#send-activity-log-to-log-analytics-workspace) | Send the Activity log to the workspace to analyze it with other log data. |
-| [Prepare hybrid machines](#prepare-hybrid-machines) | Hybrid machines either need the Arc-enabled servers agent installed so they can be managed like Azure virtual machines or have their agents installed manually. |
-| [Enable VM insights on machines](#enable-vm-insights-on-machines) | Onboard machines to VM insights, which deploys required agents and begins collecting data from guest operating system. |
+| [No configuration](#no-configuration) | Activity log and platform metrics for the Azure virtual machine hosts are automatically collected with no configuration. |
+| [Create and prepare Log Analytics workspace](#create-and-prepare-a-log-analytics-workspace) | Create a Log Analytics workspace and configure it for VM insights. Depending on your particular requirements, you might configure multiple workspaces. |
+| [Send Activity log to Log Analytics workspace](#send-an-activity-log-to-a-log-analytics-workspace) | Send the Activity log to the workspace to analyze it with other log data. |
+| [Prepare hybrid machines](#prepare-hybrid-machines) | Hybrid machines either need the server agents enabled by Azure Arc installed so they can be managed like Azure virtual machines or must have their agents installed manually. |
+| [Enable VM insights on machines](#enable-vm-insights-on-machines) | Onboard machines to VM insights, which deploys required agents and begins collecting data from the guest operating system. |
| [Send guest performance data to Metrics](#send-guest-performance-data-to-metrics) |Install the Azure Monitor agent to send performance data to Azure Monitor Metrics. | -- ## No configuration
-Azure Monitor provides a basic level of monitoring for Azure virtual machines at no cost and with no configuration. Platform metrics for Azure virtual machines include important metrics such as CPU, network, and disk utilization and can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) for the machine in the Azure portal. The Activity log is also collected automatically and includes the recent activity of the machine such as any configuration changers and when it's been stopped and started.
+Azure Monitor provides a basic level of monitoring for Azure virtual machines at no cost and with no configuration. Platform metrics for Azure virtual machines include important metrics such as CPU, network, and disk utilization. They can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) for the machine in the Azure portal. The Activity log is also collected automatically and includes the recent activity of the machine, such as any configuration changes and when it was stopped and started.
-## Create and prepare Log Analytics workspace
-You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There is no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md) for details.
+## Create and prepare a Log Analytics workspace
+You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. For more information, see [Manage usage and costs with Azure Monitor Logs](../logs/manage-cost-storage.md).
-Many environments will use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Azure Security Center and Azure Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're just getting started with Azure Monitor, then start with a single workspace and consider creating additional workspaces as your requirements evolve.
+Many environments use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Azure Security Center and Azure Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're getting started with Azure Monitor, start with a single workspace and consider creating more workspaces as your requirements evolve.
-See [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md) for complete details on logic that you should consider for designing a workspace configuration.
+For complete details on logic that you should consider for designing a workspace configuration, see [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
### Multihoming agents
-Multihoming refers to a virtual machine that connects to multiple workspaces. There typically is little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces will most likely create duplicate data in each workspace, increasing your overall cost. You can combine data from multiple workspaces using [cross workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
+Multihoming refers to a virtual machine that connects to multiple workspaces. Typically, there's little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces most likely creates duplicate data in each workspace, which increases your overall cost. You can combine data from multiple workspaces by using [cross-workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
-One reason you may consider multihoming though is an environment with Azure Security Center or Azure Sentinel stored in a separate workspace than Azure Monitor. A machine being monitored by each service would need to send data to each workspace. The Windows agent supports this scenario since it can send to up to four workspaces. The Linux agent though can currently only send to a single workspace. If you want to use have Azure Monitor and Azure Security Center or Azure Sentinel monitor a common set of Linux machines, then the services would need to share the same workspace.
+One reason you might consider multihoming, though, is if you have an environment with Azure Security Center or Azure Sentinel stored in a workspace that's separate from Azure Monitor. A machine being monitored by each service needs to send data to each workspace. The Windows agent supports this scenario because it can send to up to four workspaces. The Linux agent can currently send to only a single workspace. If you want to have Azure Monitor and Azure Security Center or Azure Sentinel monitor a common set of Linux machines, the services need to share the same workspace.
-Another other reason you may multihome your agents is in a [hybrid monitoring model](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring) where you use Azure Monitor and Operations Manager together to monitor the same machines. The Log Analytics agent and the Microsoft Management Agent for Operations Manager are the same agent, just sometimes referred to with different names.
+Another reason you might multihome your agents is if you're using a [hybrid monitoring model](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring). In this model, you use Azure Monitor and Operations Manager together to monitor the same machines. The Log Analytics agent and the Microsoft Management Agent for Operations Manager are the same agent. Sometimes they're referred to with different names.
### Workspace permissions
-The access mode of the workspace defines which users are able to access different sets of data. See [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md) for details on defining your access mode and configuring permissions. If you're just getting started with Azure Monitor, then consider accepting the defaults when you create your workspace and configure its permissions later.
-
+The access mode of the workspace defines which users can access different sets of data. For details on how to define your access mode and configure permissions, see [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md). If you're just getting started with Azure Monitor, consider accepting the defaults when you create your workspace and configure its permissions later.
### Prepare the workspace for VM insights
-You must prepare each workspace for VM insights before enabling monitoring for any virtual machines. This installs required solutions that support data collection from the Log Analytics agent. This configuration only needs to be completed once for each workspace. See [Enable VM insights overview](vminsights-enable-overview.md) for details on this configuration using the Azure portal in addition to other methods.
+Prepare each workspace for VM insights before you enable monitoring for any virtual machines. This step installs required solutions that support data collection from the Log Analytics agent. You complete this configuration only once for each workspace. For details on this configuration by using the Azure portal in addition to other methods, see [Enable VM insights overview](vminsights-enable-overview.md).
+## Send an Activity log to a Log Analytics workspace
+You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal. Send this data into the same Log Analytics workspace as VM insights to analyze it with the other monitoring data collected for the virtual machine. You might have already done this task when you configured monitoring for other Azure resources because there's a single Activity log for all resources in an Azure subscription.
-## Send Activity log to Log Analytics workspace
-You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal. Send this data into the same Log Analytics workspace as VM insights to analyze it with the other monitoring data collected for the virtual machine. You may have already done this when configuring monitoring for other Azure resources since there is a single Activity log for all resources in an Azure subscription.
-
-There is no cost for ingestion or retention of Activity log data. See [Create diagnostic settings](../essentials/diagnostic-settings.md) for details on creating a diagnostic setting to send the Activity log to your Log Analytics workspace.
+There's no cost for ingestion or retention of Activity log data. For details on how to create a diagnostic setting to send the Activity log to your Log Analytics workspace, see [Create diagnostic settings](../essentials/diagnostic-settings.md).
### Network requirements
-The Log Analytics agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Log Analytics agent for all communication, so it doesn't require any additional ports. See [Network requirements](../agents/log-analytics-agent.md#network-requirements) for details on configuring your firewall and proxy.
+The Log Analytics agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Log Analytics agent for all communication, so it doesn't require any additional ports. For details on how to configure your firewall and proxy, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
### Gateway
-The Log Analytics gateway allows you to channel communications from your on-premises machines through a single gateway. You can't use Azure Arc-enabled servers agent with the Log Analytics gateway though, so if your security policy requires a gateway, then you'll need to manually install the agents for your on-premises machines. See [Log Analytics gateway](../agents/gateway.md) for details on configuring and using the Log Analytics gateway.
+With the Log Analytics gateway, you can channel communications from your on-premises machines through a single gateway. You can't use the Azure Arc-enabled server agents with the Log Analytics gateway, though. If your security policy requires a gateway, you'll need to manually install the agents for your on-premises machines. For details on how to configure and use the Log Analytics gateway, see [Log Analytics gateway](../agents/gateway.md).
-### Azure Private link
-Azure Private Link allows you to create a private endpoint for your Log Analytics workspace. Once configured, any connections to the workspace must be made through this private endpoint. Private link works using DNS overrides, so there's no configuration requirement on individual agents. See [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md) for details on Azure private link.
+### Azure Private Link
+By using Azure Private Link, you can create a private endpoint for your Log Analytics workspace. After it's configured, any connections to the workspace must be made through this private endpoint. Private Link works by using DNS overrides, so there's no configuration requirement on individual agents. For details on Private Link, see [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md).
## Prepare hybrid machines
-A hybrid machine is ay machine not running in Azure. This is a virtual machine running in another cloud or hosted provide or a virtual or physical machine running on-premises in your data center. Use [Azure Arc enabled servers](../../azure-arc/servers/overview.md) on hybrid machines so you can manage them similar to your Azure virtual machines. VM insights in Azure Monitor allows you to use the same process to enable monitoring for Azure Arc enabled servers as you do for Azure virtual machines. See [Plan and deploy Arc-enabled servers](../../azure-arc/servers/plan-at-scale-deployment.md) for a complete guide on preparing your hybrid machines for Azure. This includes enabling individual machines and using [Azure Policy](../../governance/policy/overview.md) to enable your entire hybrid environment at scale.
+A hybrid machine is any machine not running in Azure. It's a virtual machine running in another cloud or hosted provider, or a virtual or physical machine running on-premises in your datacenter. Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) on hybrid machines so you can manage them similarly to your Azure virtual machines. With VM insights in Azure Monitor, you can use the same process to enable monitoring for Azure Arc-enabled servers as you do for Azure virtual machines. For a complete guide on preparing your hybrid machines for Azure, see [Plan and deploy Azure Arc-enabled servers](../../azure-arc/servers/plan-at-scale-deployment.md). This task includes enabling individual machines and using [Azure Policy](../../governance/policy/overview.md) to enable your entire hybrid environment at scale.
+
+There's no additional cost for Azure Arc-enabled servers, but there might be some cost for different options that you enable. For details, see [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/). There's a cost for the data collected in the workspace after the hybrid machines are enabled for VM insights.
+### Machines that can't use Azure Arc-enabled servers
+If you have any hybrid machines that match the following criteria, they won't be able to use Azure Arc-enabled servers:
+- The operating system of the machine isn't supported by the server agents enabled by Azure Arc. For more information, see [Supported operating systems](../../azure-arc/servers/agent-overview.md#prerequisites).
+- Your security policy doesn't allow machines to connect directly to Azure. The Log Analytics agent can use the [Log Analytics gateway](../agents/gateway.md) whether or not Azure Arc-enabled servers are installed. The server agents enabled by Azure Arc must connect directly to Azure.
+You still can monitor these machines with Azure Monitor, but you need to manually install their agents. To manually install the Log Analytics agent and Dependency agent on those hybrid machines, see [Enable VM insights for a hybrid virtual machine](vminsights-enable-hybrid.md).
> [!NOTE]
+> The private endpoint for Azure Arc-enabled servers is currently in public preview. The endpoint allows your hybrid machines to securely connect to Azure by using a private IP address from your virtual network.
## Enable VM insights on machines
+After you enable VM insights on a machine, it installs the Log Analytics agent and Dependency agent, connects to a workspace, and starts collecting performance data. You can start using performance views and workbooks to analyze trends for a variety of guest operating system metrics, enable the map feature of VM insights for analyzing running processes and dependencies between machines, and collect the data required for you to create a variety of alert rules.
+You can enable VM insights on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Azure Resource Manager templates, or enabling machines at scale by using Azure Policy. There's no direct cost for VM insights, but there is a cost for the ingestion and retention of data collected in the Log Analytics workspace.
+For different options to enable VM insights for your machines, see [Enable VM insights overview](vminsights-enable-overview.md). To create a policy that automatically enables VM insights on any new machines as they're created, see [Enable VM insights by using Azure Policy](vminsights-enable-policy.md).
## Send guest performance data to Metrics
+The [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) will replace the Log Analytics agent when it fully supports Azure Monitor, Azure Security Center, and Azure Sentinel. Until then, you can install it with the Log Analytics agent to send performance data from the guest operating system of machines to Azure Monitor Metrics. This configuration lets you evaluate this data with metrics explorer and use metric alerts.
+The Azure Monitor agent requires at least one data collection rule (DCR) that defines which data it should collect and where it should send that data. A single DCR can be used by any machines in the same resource group.
+Create a single DCR for each resource group with machines to monitor by using the following data source:
+- **Data source type**: Performance Counters
+- **Destination**: Azure Monitor Metrics
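As a sketch of how this data source and destination might look in a DCR, the following ARM-style fragment is illustrative only; the counter specifiers, names, and sampling frequency are placeholder assumptions, so check the current DCR schema before you rely on it.

```json
{
  "properties": {
    "dataSources": {
      "performanceCounters": [
        {
          "name": "guestPerfCounters",
          "streams": [ "Microsoft-InsightsMetrics" ],
          "samplingFrequencyInSeconds": 60,
          "counterSpecifiers": [
            "\\Processor(_Total)\\% Processor Time",
            "\\Memory\\Available Bytes"
          ]
        }
      ]
    },
    "destinations": {
      "azureMonitorMetrics": { "name": "azureMonitorMetrics-default" }
    },
    "dataFlows": [
      {
        "streams": [ "Microsoft-InsightsMetrics" ],
        "destinations": [ "azureMonitorMetrics-default" ]
      }
    ]
  }
}
```

A DCR like this is then associated with each machine in the resource group, so a single rule covers all of them.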
+Be careful to not send data to Logs because it would be redundant with the data already being collected by the Log Analytics agent.
+You can install the Azure Monitor agent on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Resource Manager templates, or enabling machines at scale by using Azure Policy. For hybrid machines that can't use Azure Arc-enabled servers, install the agent manually.
+To create a DCR and deploy the Azure Monitor agent to one or more agents by using the Azure portal, see [Create rule and association in the Azure portal](../agents/data-collection-rule-azure-monitor-agent.md). Other installation methods are described at [Install the Azure Monitor agent](../agents/azure-monitor-agent-install.md). To create a policy that automatically deploys the agent and DCR to any new machines as they're created, see [Deploy Azure Monitor at scale using Azure Policy](../deploy-scale.md#azure-monitor-agent).
## Next steps
+* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
+* [Create alerts from collected data](monitor-virtual-machine-alerts.md)
+* [Monitor workloads running on virtual machines](monitor-virtual-machine-workloads.md)
azure-monitor Monitor Virtual Machine Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-security.md
+ Title: 'Monitor virtual machines with Azure Monitor: Security'
+description: Learn about services for monitoring security of virtual machines and how they relate to Azure Monitor.
+
Last updated 06/21/2021
+# Monitor virtual machines with Azure Monitor: Security monitoring
+This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes the Azure services for monitoring security for your virtual machines and how they relate to Azure Monitor. Azure Monitor was designed to monitor the availability and performance of your virtual machines and other cloud resources. While the operational data stored in Azure Monitor might be useful for investigating security incidents, other services in Azure were designed to monitor security.
> [!IMPORTANT]
+> The security services have their own cost independent of Azure Monitor. Before you configure these services, refer to their pricing information to determine your appropriate investment in their usage.
## Azure services for security monitoring
+Azure Monitor focuses on operational data like Activity logs, Metrics, and Log Analytics supported sources, including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by Azure Security Center and Azure Sentinel. These services each have additional cost, so you should determine their value in your environment before you implement them.
+[Azure Security Center](../../security-center/security-center-introduction.md) collects information about Azure resources and hybrid servers. Although Security Center can collect security events, Security Center focuses on collecting inventory data, assessment scan results, and policy audits to highlight vulnerabilities and recommend corrective actions. Noteworthy features include an interactive network map, just-in-time VM access, adaptive network hardening, and adaptive application controls to block suspicious executables.
+[Azure Defender for Servers](../../security-center/azure-defender.md) is the server assessment solution provided by Security Center. Defender for Servers can send Windows Security Events to Log Analytics. Security Center doesn't rely on Windows Security Events for alerting or analysis. Using this feature allows centralized archival of events for investigation or other purposes.
+[Azure Sentinel](../../sentinel/overview.md) is a security information event management (SIEM) and security orchestration automated response (SOAR) solution. Sentinel collects security data from a wide range of Microsoft and third-party sources to provide alerting, visualization, and automation. This solution focuses on consolidating as many security logs as possible, including Windows Security Events. Azure Sentinel can also collect Windows Security Event Logs and commonly shares a Log Analytics workspace with Security Center. Security events can only be collected from Azure Sentinel or Security Center when they share the same workspace. Unlike Security Center, security events are a key component of alerting and analysis in Azure Sentinel.
+[Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) is an enterprise endpoint security platform designed to help enterprise networks prevent, detect, investigate, and respond to advanced threats. It was designed with a primary focus on protecting Windows user devices. Defender for Endpoint monitors workstations, servers, tablets, and cellphones with various operating systems for security issues and vulnerabilities. Defender for Endpoint is closely aligned with Microsoft Endpoint Manager to collect data and provide security assessments. Data collection is primarily based on ETW trace logs and is stored in an isolated workspace.
## Integration with Azure Monitor
+The following table lists the integration points for Azure Monitor with the security services. All the services use the same Log Analytics agent, which reduces complexity because there are no other components being deployed to your virtual machines. Security Center and Azure Sentinel store their data in a Log Analytics workspace so that you can use log queries to correlate data collected by the different services. Or you can create a custom workbook that combines security data and availability and performance data in a single view.
| Integration point | Azure Monitor | Azure Security Center | Azure Sentinel | Defender for Endpoint |
|:---|:---|:---|:---|:---|
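As one hedged example of such a correlation in a shared workspace, a log query along these lines joins failed logon counts from the SecurityEvent table (populated by the security services) with processor utilization from the InsightsMetrics table (populated by VM insights). The event ID, time grain, and column names assume common defaults; which tables exist depends on the services you've enabled.

```kusto
// Hourly failed logons per computer, joined with average CPU for the same hour
SecurityEvent
| where EventID == 4625                      // Windows failed logon event
| summarize FailedLogons = count() by Computer, bin(TimeGenerated, 1h)
| join kind=inner (
    InsightsMetrics
    | where Namespace == "Processor" and Name == "UtilizationPercentage"
    | summarize AvgCpu = avg(Val) by Computer, bin(TimeGenerated, 1h)
) on Computer, TimeGenerated
| sort by FailedLogons desc
```

A query like this only returns rows when both services write to the same workspace, which is one reason to consider a shared workspace design.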
## Workspace design considerations
+As described in [Monitor virtual machines with Azure Monitor: Configure monitoring](monitor-virtual-machine-configure.md#create-and-prepare-a-log-analytics-workspace), Azure Monitor and the security services require a Log Analytics workspace. Depending on your particular requirements, you might choose to share a common workspace or separate your availability and performance data from your security data. For complete details on logic that you should consider for designing a workspace configuration, see [Designing your Azure Monitor Logs deployment](../logs/design-logs-deployment.md).
+## Agent deployment
+You can configure Security Center to automatically deploy the Log Analytics agent to Azure virtual machines. While this might seem redundant with Azure Monitor deploying the same agent, you'll most likely want to enable both because they'll each perform their own configuration. For example, if Security Center attempts to provision a machine that's already being monitored by Azure Monitor, it will use the agent that's already installed and add the configuration for the Security Center workspace.
## Next steps
+* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
+* [Create alerts from collected data](monitor-virtual-machine-alerts.md)
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
+ Title: 'Monitor virtual machines with Azure Monitor: Workloads'
+description: Learn how to monitor the guest workloads of virtual machines in Azure Monitor.
+
Last updated 06/21/2021
+# Monitor virtual machines with Azure Monitor: Workloads
+This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to monitor workloads that are running on the guest operating systems of your virtual machines. This article includes details on analyzing and alerting on different sources of data on your virtual machines.
## Configure additional data collection
+VM insights collects only performance data from the guest operating system of enabled machines. You can enable the collection of additional performance data, events, and other monitoring data from the agent by configuring the Log Analytics workspace. You need to configure the workspace only once. Any agent that connects to the workspace automatically downloads the configuration and immediately starts collecting the defined data.
+For a list of the data sources available and details on how to configure them, see [Agent data sources in Azure Monitor](../agents/agent-data-sources.md).
> [!NOTE]
+> You can't selectively configure data collection for different machines. All machines connected to the workspace use the configuration for that workspace.
> [!IMPORTANT]
+> Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
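One way to keep an eye on that cost, assuming data is already flowing into your workspace, is to query the Usage table for billable volume by data type; Quantity is reported in megabytes.

```kusto
// Billable data volume by data type over the last 31 days
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| sort by IngestedMB desc
```

Reviewing the largest data types periodically helps you spot collection settings that no longer support an analysis or alerting scenario.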
## Convert management pack logic
+A significant number of customers who implement Azure Monitor currently monitor their virtual machine workloads by using management packs in System Center Operations Manager. There are no migration tools to convert assets from Operations Manager to Azure Monitor because the platforms are fundamentally different. Your migration instead constitutes a standard Azure Monitor implementation while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, you can start to retire different management packs and agents in Operations Manager.
+Instead of attempting to replicate the entire functionality of a management pack, analyze the critical monitoring provided by the management pack. Decide whether you can replicate those monitoring requirements by using the methods described in the previous sections. In many cases, you can configure data collection and alert rules in Azure Monitor that replicate enough functionality that you can retire a particular management pack. Management packs can often include hundreds and even thousands of rules and monitors.
+In most scenarios, Operations Manager combines data collection and alerting conditions in the same rule or monitor. In Azure Monitor, you must configure data collection and an alert rule for any alerting scenarios.
+One strategy is to focus on those monitors and rules that triggered alerts in your environment. Refer to [existing reports available in Operations Manager](/system-center/scom/manage-reports-installed-during-setup), such as **Alerts** and **Most Common Alerts**, which can help you identify alerts over time. You can also run the following query on the Operations Database to evaluate the most common recent alerts.
```sql
select AlertName, COUNT(AlertName) as 'Total Alerts' from
group by AlertName
order by 'Total Alerts' DESC
```
+Evaluate the output to identify specific alerts for migration. Ignore any alerts that were tuned out or are known to be problematic. Review your management packs to identify any critical alerts of interest that never fired.
## Windows or Syslog event
+In this common monitoring scenario, the operating system and applications write to the Windows events or Syslog. Create an alert as soon as a single event is found. Or you can wait for a series of matching events within a particular time window.
+To collect these events, configure a Log Analytics workspace to collect [Windows events](../agents/data-sources-windows-events.md) or [Syslog events](../agents/data-sources-syslog.md). There's a cost for the ingestion and retention of this data in the workspace.
+Windows events are stored in the [Event](/azure/azure-monitor/reference/tables/event) table and Syslog events are stored in the [Syslog](/azure/azure-monitor/reference/tables/syslog) table in the Log Analytics workspace.
### Sample log queries
+- **Count the number of events by computer event log and event type.**
+ ```kusto
+ Event
+ | summarize count() by Computer, EventLog, EventLevelName
+ | sort by Computer, EventLog, EventLevelName
+ ```
+- **Count the number of events by computer event log and event ID.**
+
+ ```kusto
+ Event
+ | summarize count() by Computer, EventLog, EventID
+ | sort by Computer, EventLog, EventID
+ ```
### Sample alert rules
The following sample creates an alert when a specific Windows event is created. It uses a metric measurement alert rule to create a separate alert for each computer.
+- **Create an alert rule on a specific Windows event.**
+ This example shows an event in the Application log. Specify a threshold of 0 and consecutive breaches greater than 0.
+ ```kusto
+ Event
+ | where EventLog == "Application"
+ | where EventID == 123
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
+- **Create an alert rule on Syslog events with a particular severity.**
+ The following example shows error authorization events. Specify a threshold of 0 and consecutive breaches greater than 0.
+
+ ```kusto
+ Syslog
+ | where Facility == "auth"
+ | where SeverityLevel == "err"
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
## Custom performance counters
+You might need performance counters created by applications or the guest operating system that aren't collected by VM insights. Configure the Log Analytics workspace to collect this [performance data](../agents/data-sources-performance-counters.md). There's a cost for the ingestion and retention of this data in the workspace. Be careful to not collect performance data that's already being collected by VM insights.
+Performance data configured by the workspace is stored in the [Perf](/azure/azure-monitor/reference/tables/perf) table. This table has a different structure than the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table used by VM insights.
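To illustrate the structural difference, the following sketch queries a counter from each table. The counter, object, and namespace names are examples only and may differ in your workspace.

```kusto
// Perf: custom counters configured in the Log Analytics workspace
Perf
| where ObjectName == "Process" and CounterName == "Handle Count"  // example counter
| summarize avg(CounterValue) by Computer

// InsightsMetrics: counters collected by VM insights
InsightsMetrics
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize avg(Val) by Computer
```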
### Sample log queries
+For examples of log queries that use custom performance counters, see [Log queries with Performance records](../agents/data-sources-performance-counters.md#log-queries-with-performance-records).
### Sample alerts
+- **Create an alert on the maximum value of a counter.**
+
+ ```kusto
+ Perf
+ | where CounterName == "My Counter"
+ | summarize AggregatedValue = max(CounterValue) by Computer
+ ```
+- **Create an alert on the average value of a counter.**
+ ```kusto
+ Perf
+ | where CounterName == "My Counter"
+ | summarize AggregatedValue = avg(CounterValue) by Computer
+ ```
## Text logs
+Some applications write events to a text log stored on the virtual machine. Define a [custom log](../agents/data-sources-custom-logs.md) in the Log Analytics workspace to collect these events. You define the location of the text log and its detailed configuration. There's a cost for the ingestion and retention of this data in the workspace.
+Events from the text log are stored in a table with a name similar to **MyTable_CL**. You define the name and structure of the log when you configure it.
### Sample log queries
The column names used here are for example only. You define the column names for your particular log when you define it. The column names for your log will most likely be different.
+- **Count the number of events by code.**
+
+ ```kusto
+ MyApp_CL
+ | summarize count() by code
+ ```
### Sample alert rule
+- **Create an alert rule on any error event.**
+
+ ```kusto
+ MyApp_CL
+ | where status == "Error"
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
## IIS logs
+IIS running on Windows machines writes logs to a text file. Configure a Log Analytics workspace to collect [IIS logs](../agents/data-sources-iis-logs.md). There's a cost for the ingestion and retention of this data in the workspace.
Records from the IIS log are stored in the [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) table in the Log Analytics workspace.
### Sample log queries
+- **Count the IIS log entries by URL for the host www.contoso.com.**
+
+ ```kusto
+ W3CIISLog
+ | where csHost=="www.contoso.com"
+ | summarize count() by csUriStem
+ ```
+- **Review the total bytes received by each IIS machine.**
+ ```kusto
+ W3CIISLog
+ | summarize sum(csBytes) by Computer
+ ```
### Sample alert rule
+- **Create an alert rule on any record with a return status of 500.**
+
+ ```kusto
+ W3CIISLog
+ | where scStatus==500
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
## Service or daemon
+To monitor the status of a Windows service or Linux daemon, enable the [Change Tracking and Inventory](../../automation/change-tracking/overview.md) solution in [Azure Automation](../../automation/automation-intro.md).
+Azure Monitor has no ability to monitor the status of a service or daemon. There are some possible methods to use, such as looking for events in the Windows event log, but this method is unreliable. You can also look for the process associated with the service running on the machine from the [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) table. This table only updates every hour, which isn't typically sufficient for alerting.
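As a sketch of the VMProcess approach, the following query checks whether a given process has reported recently. The process name is a placeholder, and the hourly update cadence of the table still limits how quickly an alert based on this query can fire.

```kusto
VMProcess
| where ExecutableName == "w3wp"  // placeholder process name
| summarize LastSeen = max(TimeGenerated) by Computer
| where LastSeen < ago(2h)
```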
> [!NOTE]
+> The Change Tracking and Inventory solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. This feature is in public preview and not yet included in this scenario.
+For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](../../automation/automation-quickstart-create-account.md) to support the solution.
When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log query alert rules.

| Table | Description |
|:|:|
+| [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) | Changes to in-guest configuration data |
+| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Last reported state for in-guest configuration data |
### Sample log queries
+- **List all services and daemons that have recently started.**
+
+ ```kusto
+ ConfigurationChange
+ | where ConfigChangeType == "Daemons" or ConfigChangeType == "WindowsServices"
+ | where SvcState == "Running"
+ | sort by Computer, SvcName
+ ```
### Alert rule samples
+- **Create an alert rule based on when a specific service stops.**
+
+ ```kusto
+ ConfigurationData
+ | where SvcName == "W3SVC"
+ | where SvcState == "Stopped"
+ | where ConfigDataType == "WindowsServices"
+ | where SvcStartupType == "Auto"
+ | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
+ ```
+
+- **Create an alert rule based on when one of a set of services stops.**
+
+ ```kusto
+ let services = dynamic(["omskd","cshost","schedule","wuauserv","heathservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]);
+ ConfigurationData
+ | where ConfigDataType == "WindowsServices"
+ | where SvcStartupType == "Auto"
+ | where SvcName in (services)
+ | where SvcState == "Stopped"
+ | project TimeGenerated, Computer, SvcName, SvcDisplayName, SvcState
+ | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
+ ```
## Port monitoring
+Port monitoring verifies that a machine is listening on a particular port. Two potential strategies for port monitoring are described here.
### Dependency agent tables
+Use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The VMBoundPort table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
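A minimal sketch of such an alert query, assuming a placeholder computer name and port, counts recent VMBoundPort records for the port. Used in a log query alert rule with a threshold of 0 and a less-than operator, it fires when the machine stops listening.

```kusto
VMBoundPort
| where TimeGenerated > ago(10m)
| where Computer == "web01" and Port == 443  // placeholder values
| summarize AggregatedValue = count() by Computer
```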
### Sample log queries
+- **Review the count of ports open on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**
+ ```kusto
+ VMBoundPort
+ | where Ip != "127.0.0.1"
+ | summarize by Computer, Machine, Port, Protocol
+ | summarize OpenPorts=count() by Computer, Machine
+ | order by OpenPorts desc
+ ```
+- **List the bound ports on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**
+ ```kusto
+ VMBoundPort
+ | distinct Computer, Port, ProcessName
+ ```
+- **Analyze network activity by port to determine how your application or service is configured.**
+ ```kusto
+ VMBoundPort
+ | where Ip != "127.0.0.1"
+ | summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
+ | project-away TimeGenerated
+ | order by Machine, Computer, Port, Ip, ProcessName
+ ```
+- **Review bytes sent and received trends for your VMs.**
+ ```kusto
+ VMConnection
+ | summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer
+ | order by Computer desc
+ | render timechart
+ ```
+- **Use connection failures over time to determine if the failure rate is stable or changing.**
+ ```kusto
+ VMConnection
 | where Computer == <replace this with a computer name, e.g. 'acme-demo'>
+ | extend bythehour = datetime_part("hour", TimeGenerated)
+ | project bythehour, LinksFailed
+ | summarize failCount = count() by bythehour
+ | sort by bythehour asc
+ | render timechart
+ ```
+- **Link status trends to analyze the behavior and connection status of a machine.**
+ ```kusto
+ VMConnection
 | where Computer == <replace this with a computer name, e.g. 'acme-demo'>
+ | summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
+ | render timechart
+ ```
### Connection Manager
+The [Connection Monitor](../../network-watcher/connection-monitor-overview.md) feature of [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is used to test connections to a port on a virtual machine. A test verifies that the machine is listening on the port and that it's accessible on the network.
+Connection Manager requires the Network Watcher extension on the client machine initiating the test. It doesn't need to be installed on the machine being tested. For details, see [Tutorial - Monitor network communication using the Azure portal](../../network-watcher/connection-monitor.md).
+There's an extra cost for Connection Manager. For details, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+## Run a process on a local machine
+Monitoring of some workloads requires a local process. An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there is a cost for each runbook that it uses.
+The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log and then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
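For example, if the runbook writes its results to a hypothetical custom log named MyRunbookResults_CL with a status field, the alert rule could use a query like the following. The table and column names are illustrative only; they're determined by your custom log definition.

```kusto
MyRunbookResults_CL
| where status_s == "Error"
| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
```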
## Synthetic transactions
+A synthetic transaction connects to an application or service running on a machine to simulate a user connection or actual user traffic. If the application is available, you can assume that the machine is running properly. [Application Insights](../app/app-insights-overview.md) in Azure Monitor provides this functionality. It only works for applications that are accessible from the internet. For internal applications, you must open a firewall to allow access from specific Microsoft URLs performing the test. Or you can use an alternate monitoring solution, such as System Center Operations Manager.
|Method | Description |
|:|:|
+| [URL test](../app/monitor-web-app-availability.md) | Ensures that HTTP is available and returning a web page |
+| [Multistep test](../app/availability-multistep.md) | Simulates a user session |
## SQL Server
Use [SQL insights](../insights/sql-insights-overview.md) to monitor SQL Server running on your virtual machines.

## Next steps
+* [Learn how to analyze data in Azure Monitor logs using log queries](../logs/get-started-queries.md)
+* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md)
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine.md
Title: Monitor virtual machines with Azure Monitor
+description: Learn how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads.
+
Last updated 06/02/2021
+# Monitor virtual machines with Azure Monitor
+This scenario describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues.
+This article introduces the scenario and provides general concepts for monitoring virtual machines in Azure Monitor. If you want to jump right into a specific area, see one of the other articles that are part of this scenario described in the following table.
| Article | Description |
|:|:|
+| [Enable monitoring](monitor-virtual-machine-configure.md) | Configure Azure Monitor to monitor virtual machines, which includes enabling VM insights and enabling each virtual machine for monitoring. |
| [Analyze](monitor-virtual-machine-analyze.md) | Analyze monitoring data collected by Azure Monitor from virtual machines and their guest operating systems and applications to identify trends and critical information. |
| [Alerts](monitor-virtual-machine-alerts.md) | Create alerts to proactively identify critical issues in your monitoring data. |
+| [Monitor security](monitor-virtual-machine-security.md) | Discover Azure services for monitoring security of virtual machines. |
| [Monitor workloads](monitor-virtual-machine-workloads.md) | Monitor applications and other workloads running on your virtual machines. |

> [!IMPORTANT]
-
+> This scenario doesn't include features that aren't generally available. Features in public preview such as [virtual machine guest health](vminsights-health-overview.md) have the potential to significantly modify the recommendations made here. The scenario will be updated as preview features move into general availability.
## Types of machines
+This scenario includes monitoring of the following types of machines using Azure Monitor. Many of the processes described here are the same regardless of the type of machine. Considerations for different types of machines are clearly identified where appropriate. The types of machines include:
+- Azure virtual machines.
+- Azure virtual machine scale sets.
+- Hybrid machines, which are virtual machines running in other clouds, with a managed service provider, or on-premises. They also include physical machines running on-premises.
## Layers of monitoring
There are fundamentally four layers to a virtual machine that require monitoring. Each layer has a distinct set of telemetry and monitoring requirements.

| Layer | Description |
|:|:|
+| Virtual machine host | The host virtual machine in Azure. Azure Monitor has no access to the host in other clouds but must rely on information collected from the guest operating system. The host can be useful for tracking activity such as configuration changes, but typically isn't used for significant alerting. |
+| Guest operating system | The operating system running on the virtual machine, which is some version of either Windows or Linux. A significant amount of monitoring data is available from the guest operating system, such as performance data and events. VM insights in Azure Monitor provides a significant amount of logic for monitoring the health and performance of the guest operating system. |
+| Workloads | Workloads running in the guest operating system that support your business applications. Azure Monitor provides predefined monitoring for some workloads. You typically need to configure data collection and alerting for other workloads by using monitoring data that they generate. |
+| Application | The business application that depends on your virtual machines can be monitored by using [Application Insights](../app/app-insights-overview.md). |
## VM insights
+This scenario focuses on [VM insights](../vm/vminsights-overview.md), which is the primary feature in Azure Monitor for monitoring virtual machines. VM insights provides the following features:
- Simplified onboarding of agents to enable monitoring of a virtual machine guest operating system and workloads.
+- Predefined trending performance charts and workbooks that you can use to analyze core performance metrics from the virtual machine's guest operating system.
- Dependency map that displays processes running on each virtual machine and the interconnected components with other machines and external sources.
## Agents
-Any monitoring tool such as Azure Monitor requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers will use, but you should be aware of the different agents that are described in the following table in case you require the particular scenarios that they support. See [Overview of Azure Monitor agents](../agents/agents-overview.md) for a detailed description and comparison of the different agents.
+Any monitoring tool, such as Azure Monitor, requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers will use. Different agents are described in the following table in case you require the particular scenarios that they support. For a detailed description and comparison of the different agents, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
> [!NOTE]
-> When the Azure Monitor agent fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent.
-- [Azure Monitor agent](../agents/agents-overview.md#log-analytics-agent) - Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.
-- [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent) - Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This is the same agent used for System Center Operations Manager.
-- [Dependency agent](../agents/agents-overview.md#dependency-agent) - Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
-- [Azure Diagnostic extension](../agents/agents-overview.md#azure-diagnostics-extension) - Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
+> When the Azure Monitor agent fully supports VM insights, Azure Security Center, and Azure Sentinel, it will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent.
+- [Azure Monitor agent](../agents/agents-overview.md#log-analytics-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.
+- [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager.
+- [Dependency agent](../agents/agents-overview.md#dependency-agent): Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
+- [Azure Diagnostic extension](../agents/agents-overview.md#azure-diagnostics-extension): Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
## Next steps
-* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
+[Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
azure-sql Migrate To Database From Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/migrate-to-database-from-sql-server.md
In both cases, you need to ensure that the source database is compatible with Az
Use this method to migrate to a single or a pooled database if you can afford some downtime or you're performing a test migration of a production database for later migration. For a tutorial, see [Migrate a SQL Server database](../../dms/tutorial-sql-server-to-azure-sql.md).
-The following list contains the general workflow for a SQL Server database migration of a single or a pooled database using this method. For migration to SQL Managed Instance, see [Migration to SQL Managed Instance](../managed-instance/migrate-to-instance-from-sql-server.md).
+The following list contains the general workflow for a SQL Server database migration of a single or a pooled database using this method. For migration to SQL Managed Instance, see [SQL Server to Azure SQL Managed Instance Guide](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
![VSSSDT migration diagram](./media/migrate-to-database-from-sql-server/azure-sql-migration-sql-db.png)
azure-sql Troubleshoot Transaction Log Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-transaction-log-errors-issues.md
Previously updated : 06/02/2021 Last updated : 07/23/2021
# Troubleshooting transaction log errors with Azure SQL Database and Azure SQL Managed Instance
The following values of `log_reuse_wait_desc` in `sys.databases` may indicate th
| log_reuse_wait_desc | Diagnosis | Response required |
|--|--|--|
-| **Nothing** | Typical state. There is nothing blocking the log from truncating. | No. |
-| **Checkpoint** | A checkpoint is needed for log truncation. Rare. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
-| **Log Backup** | A log backup is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
-| **Active backup or restore** | A database backup is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
-| **Active transaction** | An ongoing transaction is preventing log truncation. | The log file cannot be truncated due to active and/or uncommitted transactions. See next section.|
+| **NOTHING** | Typical state. There is nothing blocking the log from truncating. | No. |
+| **CHECKPOINT** | A checkpoint is needed for log truncation. Rare. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
+| **LOG BACKUP** | A log backup is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
+| **ACTIVE BACKUP OR RESTORE** | A database backup is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
+| **ACTIVE TRANSACTION** | An ongoing transaction is preventing log truncation. | The log file cannot be truncated due to active and/or uncommitted transactions. See next section.|
+| **REPLICATION** | In Azure SQL Database, likely due to [change data capture (CDC)](/sql/relational-databases/track-changes/about-change-data-capture-sql-server) feature.<BR>In Azure SQL Managed Instance, due to [replication](../managed-instance/replication-transactional-overview.md) or CDC. | In Azure SQL Database, query [sys.dm_cdc_errors](/sql/relational-databases/system-dynamic-management-views/change-data-capture-sys-dm-cdc-errors) and resolve errors. If unresolvable, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support).<BR>In Azure SQL Managed Instance, if sustained, investigate agents involved with CDC or replication. For troubleshooting CDC, query jobs in [msdb.dbo.cdc_jobs](/sql/relational-databases/system-tables/dbo-cdc-jobs-transact-sql). If not present, add via [sys.sp_cdc_add_job](/sql/relational-databases/system-stored-procedures/sys-sp-cdc-add-job-transact-sql). For replication, consider [Troubleshooting transactional replication](/sql/relational-databases/replication/troubleshoot-tran-repl-errors). If unresolvable, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
| **AVAILABILITY_REPLICA** | Synchronization to the secondary replica is in progress. | No response required unless sustained. If sustained, file a support request with [Azure Support](https://portal.azure.com/#create/Microsoft.Support). |
### Log truncation prevented by an active transaction
The most common scenario for a transaction log that cannot accept new transactions is a long-running or blocked transaction.
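To see the current wait reason for a database, and to find the oldest open transactions that may be holding up truncation, queries along these lines can be used (a sketch built on the standard `sys.databases` and `sys.dm_tran_*` views; run them in the affected database):

```sql
-- Current log reuse wait reason for this database.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Active transactions in this database, oldest first.
SELECT dt.transaction_id, st.session_id, dt.database_transaction_begin_time
FROM sys.dm_tran_database_transactions AS dt
JOIN sys.dm_tran_session_transactions AS st
    ON st.transaction_id = dt.transaction_id
WHERE dt.database_id = DB_ID()
ORDER BY dt.database_transaction_begin_time;
```

The `session_id` returned by the second query can then be investigated (and, if appropriate, killed) to allow the log to truncate.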
azure-sql Xevent Db Diff From Svr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/xevent-db-diff-from-svr.md
ms.devlang:
- Previously updated : 12/19/2018
+ Last updated : 07/23/2021
# Extended events in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
Additional information about extended events is available at:
## Prerequisites
-This topic assumes you already have some knowledge of:
+This article assumes you already have some knowledge of:
- [Azure SQL Database](https://azure.microsoft.com/services/sql-database/)
- [Extended events](/sql/relational-databases/extended-events/extended-events)
Prior exposure to the following items is helpful when choosing the Event File as
## Code samples
-Related topics provide two code samples:
+Related articles provide two code samples:
- [Ring Buffer target code for extended events in Azure SQL Database](xevent-code-ring-buffer.md) - Short simple Transact-SQL script.
- - We emphasize in the code sample topic that, when you are done with a Ring Buffer target, you should release its resources by executing an alter-drop `ALTER EVENT SESSION ... ON DATABASE DROP TARGET ...;` statement. Later you can add another instance of Ring Buffer by `ALTER EVENT SESSION ... ON DATABASE ADD TARGET ...`.
+ - We emphasize in the code sample article that, when you are done with a Ring Buffer target, you should release its resources by executing an alter-drop `ALTER EVENT SESSION ... ON DATABASE DROP TARGET ...;` statement. Later you can add another instance of Ring Buffer by `ALTER EVENT SESSION ... ON DATABASE ADD TARGET ...`.
- [Event File target code for extended events in Azure SQL Database](xevent-code-event-file.md)
The extended events feature is supported by several [catalog views](/sql/relatio
| Name of<br/>catalog view | Description |
|: |: |
-| **sys.database_event_session_actions** |Returns a row for each action on each event of an event session. |
-| **sys.database_event_session_events** |Returns a row for each event in an event session. |
-| **sys.database_event_session_fields** |Returns a row for each customize-able column that was explicitly set on events and targets. |
-| **sys.database_event_session_targets** |Returns a row for each event target for an event session. |
-| **sys.database_event_sessions** |Returns a row for each event session in the database. |
+| `sys.database_event_session_actions` |Returns a row for each action on each event of an event session. |
+| `sys.database_event_session_events` |Returns a row for each event in an event session. |
+| `sys.database_event_session_fields` |Returns a row for each customizable column that was explicitly set on events and targets. |
+| `sys.database_event_session_targets` |Returns a row for each event target for an event session. |
+| `sys.database_event_sessions` |Returns a row for each event session in the database. |
-In Microsoft SQL Server, similar catalog views have names that include *.server\_* instead of *.database\_*. The name pattern is like **sys.server_event_%**.
+In Microsoft SQL Server, similar catalog views have names that include *.server\_* instead of *.database\_*. The name pattern is like `sys.server_event_%`.
## New dynamic management views [(DMVs)](/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views)
Azure SQL Database has [dynamic management views (DMVs)](/sql/relational-databas
| Name of DMV | Description |
|: |: |
-| **sys.dm_xe_database_session_event_actions** |Returns information about event session actions. |
-| **sys.dm_xe_database_session_events** |Returns information about session events. |
-| **sys.dm_xe_database_session_object_columns** |Shows the configuration values for objects that are bound to a session. |
-| **sys.dm_xe_database_session_targets** |Returns information about session targets. |
-| **sys.dm_xe_database_sessions** |Returns a row for each event session that is scoped to the current database. |
+| `sys.dm_xe_database_session_event_actions` |Returns information about event session actions. |
+| `sys.dm_xe_database_session_events` |Returns information about session events. |
+| `sys.dm_xe_database_session_object_columns` |Shows the configuration values for objects that are bound to a session. |
+| `sys.dm_xe_database_session_targets` |Returns information about session targets. |
+| `sys.dm_xe_database_sessions` |Returns a row for each event session that is scoped to the current database. |
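For example, a query of this shape shows the event sessions currently running in the database and their targets (a sketch; it assumes only the DMVs listed above):

```sql
-- Active event sessions in the current database and their targets.
SELECT s.name AS session_name, t.target_name
FROM sys.dm_xe_database_sessions AS s
INNER JOIN sys.dm_xe_database_session_targets AS t
    ON t.event_session_address = s.address;
```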
In Microsoft SQL Server, similar catalog views are named without the *\_database* portion of the name, such as:
-- **sys.dm_xe_sessions**, instead of name<br/>**sys.dm_xe_database_sessions**.
+- `sys.dm_xe_sessions` instead of `sys.dm_xe_database_sessions`.
### DMVs common to both
For extended events there are additional DMVs that are common to Azure SQL Database, Azure SQL Managed Instance, and Microsoft SQL Server:
-- **sys.dm_xe_map_values**
-- **sys.dm_xe_object_columns**
-- **sys.dm_xe_objects**
-- **sys.dm_xe_packages**
+- `sys.dm_xe_map_values`
+- `sys.dm_xe_object_columns`
+- `sys.dm_xe_objects`
+- `sys.dm_xe_packages`
<a name="sqlfindseventsactionstargets" id="sqlfindseventsactionstargets"></a>

## Find the available extended events, actions, and targets
-You can run a simple SQL **SELECT** to obtain a list of the available events, actions, and target.
+To obtain a list of the available events, actions, and targets, use the sample query:
```sql SELECT
SELECT
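A query along the following lines, joining the standard `sys.dm_xe_objects` and `sys.dm_xe_packages` views, returns the available events, actions, and targets (a sketch; the article's exact query may differ):

```sql
SELECT
    o.object_type,
    p.name        AS [package_name],
    o.name        AS [db_object_name],
    o.description AS [db_obj_description]
FROM sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p
    ON p.guid = o.package_guid
WHERE o.object_type IN ('action', 'event', 'target')
ORDER BY o.object_type, p.name, o.name;
```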
Here are targets that can capture results from your event sessions on Azure SQL Database:
-- [Ring Buffer target](/previous-versions/sql/sql-server-2016/bb630339(v=sql.130)) - Briefly holds event data in memory.
-- [Event Counter target](/previous-versions/sql/sql-server-2016/ff878025(v=sql.130)) - Counts all events that occur during an extended events session.
-- [Event File target](/previous-versions/sql/sql-server-2016/ff878115(v=sql.130)) - Writes complete buffers to an Azure Storage container.
+- [Ring Buffer target](/sql/relational-databases/extended-events/targets-for-extended-events-in-sql-server#ring_buffer-target) - Briefly holds event data in memory.
+- [Event Counter target](/sql/relational-databases/extended-events/targets-for-extended-events-in-sql-server#event_counter-target) - Counts all events that occur during an extended events session.
+- [Event File target](/sql/relational-databases/extended-events/targets-for-extended-events-in-sql-server#event_file-target) - Writes complete buffers to an Azure Storage container.
The [Event Tracing for Windows (ETW)](/dotnet/framework/wcf/samples/etw-tracing) API is not available for extended events on Azure SQL Database.
The [Event Tracing for Windows (ETW)](/dotnet/framework/wcf/samples/etw-tracing)
There are a couple of security-related differences befitting the cloud environment of Azure SQL Database:
- Extended events are founded on the single-tenant isolation model. An event session in one database cannot access data or events from another database.
-- You cannot issue a **CREATE EVENT SESSION** statement in the context of the **master** database.
+- You cannot issue a `CREATE EVENT SESSION` statement in the context of the `master` database.
+
## Permission model
-You must have **Control** permission on the database to issue a **CREATE EVENT SESSION** statement. The database owner (dbo) has **Control** permission.
+You must have **Control** permission on the database to issue a `CREATE EVENT SESSION` statement. The database owner (dbo) has **Control** permission.
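For a principal other than dbo, the permission can be granted like this (the database and principal names here are hypothetical examples):

```sql
-- Hypothetical names; CONTROL on the database permits CREATE EVENT SESSION.
GRANT CONTROL ON DATABASE::[MyDatabase] TO [xevent_admin];
```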
### Storage container authorizations
The SAS token you generate for your Azure Storage container must specify **rwl**
There are scenarios where intensive use of extended events can accumulate more active memory than is healthy for the overall system. Therefore Azure SQL Database dynamically sets and adjusts limits on the amount of active memory that can be accumulated by an event session. Many factors go into the dynamic calculation.
+There is a cap on memory available to XEvent sessions in Azure SQL Database:
+ - In single Azure SQL Database in the DTU purchasing model, each database can use up to 128 MB. This is raised to 256 MB only in the Premium tier.
+ - In single Azure SQL Database in the vCore purchasing model, each database can use up to 128 MB.
+ - In an elastic pool, individual databases are limited by the single database limits, and in total they cannot exceed 512 MB.
+ If you receive an error message that says a memory maximum was enforced, some corrective actions you can take are: - Run fewer concurrent event sessions.
The **Event File** target might experience network latency or failures while per
## Related links
-- [Using Azure PowerShell with Azure Storage](/powershell/module/az.storage/).
- [Azure Storage Cmdlets](/powershell/module/Azure.Storage)
- [Using Azure PowerShell with Azure Storage](/powershell/module/az.storage/)
- [How to use Blob storage from .NET](../../storage/blobs/storage-quickstart-blobs-dotnet.md)
- [CREATE CREDENTIAL (Transact-SQL)](/sql/t-sql/statements/create-credential-transact-sql)
- [CREATE EVENT SESSION (Transact-SQL)](/sql/t-sql/statements/create-event-session-transact-sql)
-- [Jonathan Kehayias' blog posts about extended events in Microsoft SQL Server](https://www.sqlskills.com/blogs/jonathan/category/extended-events/)
- The Azure *Service Updates* webpage, narrowed by parameter to Azure SQL Database:
- [https://azure.microsoft.com/updates/?service=sql-database](https://azure.microsoft.com/updates/?service=sql-database)
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/how-to-content-reference-guide.md
In this article you can find a content reference to various guides, scripts, and
## Load data
-- [Migrate to Azure SQL Managed Instance](migrate-to-instance-from-sql-server.md): Learn about the recommended migration process and tools for migration to Azure SQL Managed Instance.
+- [SQL Server to Azure SQL Managed Instance Guide](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md): Learn about the recommended migration process and tools for migration to Azure SQL Managed Instance.
- [Migrate TDE cert to Azure SQL Managed Instance](tde-certificate-migrate.md): If your SQL Server database is protected with transparent data encryption (TDE), you would need to migrate the certificate that SQL Managed Instance can use to decrypt the backup that you want to restore in Azure.
- [Import a DB from a BACPAC](../database/database-import.md)
- [Export a DB to BACPAC](../database/database-export.md)
azure-sql Migrate To Instance From Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/migrate-to-instance-from-sql-server.md
- Title: Migrate from SQL Server to Azure SQL Managed Instance
-description: Learn how to migrate a database from SQL Server to Azure SQL Managed Instance.
- Previously updated : 06/23/2021
-# SQL Server instance migration to Azure SQL Managed Instance
-
-In this article, you learn about the methods for migrating a SQL Server 2005 or later version instance to [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md). For information on migrating to a single database or elastic pool, see [Migration overview: SQL Server to SQL Database](../migration-guides/database/sql-server-to-sql-database-overview.md). For migration information about migrating from other platforms and guidance on tools and options, see [Migrate to Azure SQL](../migration-guides/index.yml).
-
-> [!NOTE]
-> If you want to quickly start and try Azure SQL Managed Instance, you might want to go to the [quickstart guide](quickstart-content-reference-guide.md) instead of this page.
-
-At a high level, the database migration process looks like:
-
-![Migration process](./media/migrate-to-instance-from-sql-server/migration-process.png)
--- [Assess SQL Managed Instance compatibility](#assess-sql-managed-instance-compatibility) where you should ensure that there are no blocking issues that can prevent your migrations.
-
- This step also includes creation of a [performance baseline](#create-a-performance-baseline) to determine resource usage on your source SQL Server instance. This step is needed if you want to deploy a properly sized managed instance and verify that performance after migration is not affected.
-- [Choose app connectivity options](connect-application-instance.md).
-- [Deploy to an optimally sized managed instance](#deploy-to-an-optimally-sized-managed-instance) where you will choose technical characteristics (number of vCores, amount of memory) and performance tier (Business Critical, General Purpose) of your managed instance.
-- [Select migration method and migrate](#select-a-migration-method-and-migrate) where you migrate your databases using offline migration (native backup/restore, database import/export) or online migration (Azure Data Migration Service, transactional replication).
-- [Monitor applications](#monitor-applications) to ensure that you have expected performance.
-> [!NOTE]
-> To migrate an individual database into either a single database or an elastic pool, see [Migrate a SQL Server database to Azure SQL Database](../database/migrate-to-database-from-sql-server.md).
-
-## Assess SQL Managed Instance compatibility
-
-First, determine whether SQL Managed Instance is compatible with the database requirements of your application. SQL Managed Instance is designed to provide easy lift and shift migration for the majority of existing applications that use SQL Server. However, you may sometimes require features or capabilities that are not yet supported and the cost of implementing a workaround is too high.
-
-Use [Data Migration Assistant](/sql/dma/dma-overview) to detect potential compatibility issues impacting database functionality on Azure SQL Database. If there are some reported blocking issues, you might need to consider an alternative option, such as [SQL Server on Azure VM](https://azure.microsoft.com/services/virtual-machines/sql-server/). Here are some examples:
-- If you require direct access to the operating system or file system, for instance to install third-party or custom agents on the same virtual machine with SQL Server.
-- If you have strict dependency on features that are still not supported, such as FileStream/FileTable, PolyBase, and cross-instance transactions.
-- If you absolutely need to stay at a specific version of SQL Server (2012, for instance).
-- If your compute requirements are much lower than managed instance offers (one vCore, for instance), and database consolidation is not an acceptable option.
-If you have resolved all identified migration blockers and are continuing the migration to SQL Managed Instance, note that some of the changes might affect performance of your workload:
-- Mandatory full recovery model and regular automated backup schedule might impact performance of your workload or maintenance/ETL actions if you have periodically used simple/bulk-logged model or stopped backups on demand.
-- Different server or database level configurations such as trace flags or compatibility levels.
-- New features that you are using such as Transparent Database Encryption (TDE) or auto-failover groups might impact CPU and IO usage.
-SQL Managed Instance guarantees 99.99% availability even in critical scenarios, so overhead caused by these features cannot be disabled. For more information, see [the root causes that might cause different performance on SQL Server and Azure SQL Managed Instance](https://azure.microsoft.com/blog/key-causes-of-performance-differences-between-sql-managed-instance-and-sql-server/).
-
-#### In-Memory OLTP (Memory-optimized tables)
-
-SQL Server provides In-Memory OLTP capability that allows usage of memory-optimized tables, memory-optimized table types and natively compiled SQL modules to run workloads that have high throughput and low latency transactional processing requirements.
-
-> [!IMPORTANT]
-> In-Memory OLTP is only supported in the Business Critical tier in Azure SQL Managed Instance (and not supported in the General Purpose tier).
-
-If you have memory-optimized tables or memory-optimized table types in your on-premises SQL Server and you are looking to migrate to Azure SQL Managed Instance, you should either:
-- Choose Business Critical tier for your target Azure SQL Managed Instance that supports In-Memory OLTP, or
-- If you want to migrate to General Purpose tier in Azure SQL Managed Instance, remove memory-optimized tables, memory-optimized table types and natively compiled SQL modules that interact with memory-optimized objects before migrating your database(s). The following T-SQL query can be used to identify all objects that need to be removed before migration to General Purpose tier:
-```tsql
-SELECT * FROM sys.tables WHERE is_memory_optimized=1
-SELECT * FROM sys.table_types WHERE is_memory_optimized=1
-SELECT * FROM sys.sql_modules WHERE uses_native_compilation=1
-```
-
-To learn more about in-memory technologies, see [Optimize performance by using in-memory technologies in Azure SQL Database and Azure SQL Managed Instance](../in-memory-oltp-overview.md)
-
-### Create a performance baseline
-
-If you need to compare the performance of your workload on a managed instance with your original workload running on SQL Server, you would need to create a performance baseline that will be used for comparison.
-
-Performance baseline is a set of parameters such as average/max CPU usage, average/max disk IO latency, throughput, IOPS, average/max page life expectancy, and average max size of tempdb. You would like to have similar or even better parameters after migration, so it is important to measure and record the baseline values for these parameters. In addition to system parameters, you would need to select a set of the representative queries or the most important queries in your workload and measure min/average/max duration and CPU usage for the selected queries. These values would enable you to compare performance of workload running on the managed instance to the original values on your source SQL Server instance.
-
-Some of the parameters that you would need to measure on your SQL Server instance are:
-- [Monitor CPU usage on your SQL Server instance](https://techcommunity.microsoft.com/t5/Azure-SQL-Database/Monitor-CPU-usage-on-SQL-Server/ba-p/680777#M131) and record the average and peak CPU usage.
-- [Monitor memory usage on your SQL Server instance](/sql/relational-databases/performance-monitor/monitor-memory-usage) and determine the amount of memory used by different components such as buffer pool, plan cache, column-store pool, [In-Memory OLTP](/sql/relational-databases/in-memory-oltp/monitor-and-troubleshoot-memory-usage), etc. In addition, you should find average and peak values of the Page Life Expectancy memory performance counter.
-- Monitor disk IO usage on the source SQL Server instance using [sys.dm_io_virtual_file_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-io-virtual-file-stats-transact-sql) view or [performance counters](/sql/relational-databases/performance-monitor/monitor-disk-usage).
-- Monitor workload and query performance of your SQL Server instance by examining Dynamic Management Views or Query Store if you are migrating from a SQL Server 2016+ version. Identify average duration and CPU usage of the most important queries in your workload to compare them with the queries that are running on the managed instance.
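For the disk IO portion of the baseline, a query along these lines computes average read and write latency per file from `sys.dm_io_virtual_file_stats` (a sketch; the values are cumulative since instance start):

```sql
-- Average IO latency (ms) per database file since instance start.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       CASE WHEN vfs.num_of_reads = 0 THEN 0
            ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS avg_read_latency_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id;
```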
-> [!Note]
-> If you notice any issue with your workload on SQL Server such as high CPU usage, constant memory pressure, or tempdb or parameterization issues, you should try to resolve them on your source SQL Server instance before taking the baseline and migration. Migrating known issues to any new system might cause unexpected results and invalidate any performance comparison.
-
-As an outcome of this activity, you should have documented average and peak values for CPU, memory, and IO usage on your source system, as well as average and max duration and CPU usage of the dominant and the most critical queries in your workload. You should use these values later to compare performance of your workload on a managed instance with the baseline performance of the workload on the source SQL Server instance.
-
-## Deploy to an optimally sized managed instance
-
-SQL Managed Instance is tailored for on-premises workloads that are planning to move to the cloud. It introduces a [new purchasing model](../database/service-tiers-vcore.md) that provides greater flexibility in selecting the right level of resources for your workloads. In the on-premises world, you are probably accustomed to sizing these workloads by using physical cores and IO bandwidth. The purchasing model for managed instance is based upon virtual cores, or "vCores," with additional storage and IO available separately. The vCore model is a simpler way to understand your compute requirements in the cloud versus what you use on-premises today. This new model enables you to right-size your destination environment in the cloud. Some general guidelines that might help you to choose the right service tier and characteristics are described here:
-- Based on the baseline CPU usage, you can provision a managed instance that matches the number of cores that you are using on SQL Server, having in mind that CPU characteristics might need to be scaled to match [VM characteristics where the managed instance is installed](resource-limits.md#hardware-generation-characteristics).
-- Based on the baseline memory usage, choose [the service tier that has matching memory](resource-limits.md#hardware-generation-characteristics). The amount of memory cannot be directly chosen, so you would need to select the managed instance with the amount of vCores that has matching memory (for example, 5.1 GB/vCore in Gen5).
-- Based on the baseline IO latency of the file subsystem, choose between the General Purpose (latency greater than 5 ms) and Business Critical (latency less than 3 ms) service tiers.
-- Based on baseline throughput, pre-allocate the size of data or log files to get expected IO performance.
-You can choose compute and storage resources at deployment time and then change it afterward without introducing downtime for your application using the [Azure portal](../database/scale-resources.md):
-
-![Managed instance sizing](./media/migrate-to-instance-from-sql-server/managed-instance-sizing.png)
-
-To learn how to create the VNet infrastructure and a managed instance, see [Create a managed instance](instance-create-quickstart.md).
-
-> [!IMPORTANT]
-> It is important to keep your destination VNet and subnet in accordance with [managed instance VNet requirements](connectivity-architecture-overview.md#network-requirements). Any incompatibility can prevent you from creating new instances or using those that you already created. Learn more about [creating new](virtual-network-subnet-create-arm-template.md) and [configuring existing](vnet-existing-add-subnet.md) networks.
-
-## Select a migration method and migrate
-
-SQL Managed Instance targets user scenarios requiring mass database migration from on-premises or Azure VM database implementations. It is the optimal choice when you need to lift and shift the back end of applications that regularly use instance-level and/or cross-database functionalities. If this is your scenario, you can move an entire instance to a corresponding environment in Azure without the need to re-architect your applications.
-
-To move SQL instances, you need to plan carefully:
-
-- The migration of all databases that need to be collocated (ones running on the same instance).
-- The migration of instance-level objects that your application depends on, including logins, credentials, SQL Agent jobs and operators, and server-level triggers.
-
-SQL Managed Instance is a managed service that allows you to delegate some of the regular DBA activities to the platform as they are built in. Therefore, some instance-level data does not need to be migrated, such as maintenance jobs for regular backups or Always On configuration, as [high availability](../database/high-availability-sla.md) is built in.
-
-SQL Managed Instance supports the following database migration options (currently these are the only supported migration methods):
-
-- Azure Database Migration Service - migration with near-zero downtime.
-- Native `RESTORE DATABASE FROM URL` - uses native backups from SQL Server and requires some downtime.
-
-### Azure Database Migration Service
-
-[Azure Database Migration Service](../../dms/dms-overview.md) is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime. This service streamlines the tasks required to move existing third-party and SQL Server databases to Azure. Deployment options at public preview include databases in Azure SQL Database and SQL Server databases in an Azure virtual machine. Database Migration Service is the recommended method of migration for your enterprise workloads.
-
-If you use SQL Server Integration Services (SSIS) on SQL Server on-premises, Database Migration Service does not yet support migrating the SSIS catalog (SSISDB) that stores SSIS packages. However, you can provision an Azure-SSIS Integration Runtime (IR) in Azure Data Factory, which creates a new SSISDB in a managed instance so that you can redeploy your packages to it. See [Create Azure-SSIS IR in Azure Data Factory](../../data-factory/create-azure-ssis-integration-runtime.md).
-
-To learn more about this scenario and configuration steps for Database Migration Service, see [Migrate your on-premises database to managed instance using Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md).
-
-### Native RESTORE from URL
-
-RESTORE of native backups (.bak files) taken from a SQL Server instance, available on [Azure Storage](https://azure.microsoft.com/services/storage/), is one of the key capabilities of SQL Managed Instance that enables quick and easy offline database migration.
-
-The following diagram provides a high-level overview of the process:
-
-![Diagram shows SQL Server with an arrow labeled BACKUP / Upload to URL flowing to Azure Storage and a second arrow labeled RESTORE from URL flowing from Azure Storage to a Managed Instance of SQL.](./media/migrate-to-instance-from-sql-server/migration-flow.png)
-
-The following table provides more information regarding the methods you can use depending on the source SQL Server version you are running:
-
-|Step|SQL Engine and version|Backup/restore method|
-||||
-|Put backup to Azure Storage|Prior to 2012 SP1 CU2|Upload .bak file directly to Azure Storage|
-||2012 SP1 CU2 - 2016|Direct backup using deprecated [WITH CREDENTIAL](/sql/t-sql/statements/restore-statements-transact-sql) syntax|
-||2016 and above|Direct backup using [WITH SAS CREDENTIAL](/sql/relational-databases/backup-restore/sql-server-backup-to-url)|
-|Restore from Azure Storage to a managed instance||[RESTORE FROM URL with SAS CREDENTIAL](restore-sample-database-quickstart.md)|
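For the 2016-and-above path, a direct backup to URL looks roughly like the following sketch. The storage account, container, SAS token, and database name are placeholders; note that the credential name must exactly match the container URL:

```sql
-- Create a SAS-based credential named after the container URL.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS-token-without-leading-?>';

-- Back up the database directly to Azure Storage.
BACKUP DATABASE [MyDatabase]
TO URL = N'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase.bak';
```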
-
-> [!IMPORTANT]
->
-> - When you're migrating a database protected by [Transparent Data Encryption](../database/transparent-data-encryption-tde-overview.md) to a managed instance using native restore option, the corresponding certificate from the on-premises or Azure VM SQL Server needs to be migrated before database restore. For detailed steps, see [Migrate a TDE cert to a managed instance](tde-certificate-migrate.md).
-> - Restore of system databases is not supported. To migrate instance-level objects (stored in the master or msdb databases), we recommend scripting them out and running the T-SQL scripts on the destination instance.
-
-For a quickstart showing how to restore a database backup to a managed instance using a SAS credential, see [Restore from backup to a managed instance](restore-sample-database-quickstart.md).
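On the managed instance side, the restore itself is a single statement once the same SAS credential exists there (names below are placeholders, as in the backup example):

```sql
-- The managed instance needs a credential for the same container.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS-token-without-leading-?>';

-- Restore directly from the .bak file in Azure Storage.
RESTORE DATABASE [MyDatabase]
FROM URL = N'https://<storageaccount>.blob.core.windows.net/<container>/MyDatabase.bak';
```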
-
-> [!VIDEO https://www.youtube.com/embed/RxWYojo_Y3Q]
-
-## Monitor applications
-
-Once you have completed the migration to a managed instance, you should track the application behavior and performance of your workload. This process includes the following activities:
-
-- [Compare performance of the workload running on the managed instance](#compare-performance-with-the-baseline) with the [performance baseline that you created on the source SQL Server instance](#create-a-performance-baseline).
-- Continuously [monitor performance of your workload](#monitor-performance) to identify potential issues and improvements.
-
-### Compare performance with the baseline
-
-The first activity to perform immediately after a successful migration is to compare the performance of the workload with the baseline workload performance. The goal of this activity is to confirm that the workload performance on your managed instance meets your needs.
-
-Database migration to a managed instance keeps the database settings and original compatibility level in the majority of cases. The original settings are preserved where possible in order to reduce the risk of performance degradations compared to your source SQL Server instance. If the compatibility level of a user database was 100 or higher before the migration, it remains the same after migration. If the compatibility level of a user database was 90 before migration, in the upgraded database the compatibility level is set to 100, which is the lowest supported compatibility level in a managed instance. The compatibility level of system databases is 140. Since migration to a managed instance is actually a migration to the latest version of the SQL Server database engine, you should re-test the performance of your workload to avoid surprising performance issues.
-
-As a prerequisite, make sure that you have completed the following activities:
-
-- Align your settings on the managed instance with the settings from the source SQL Server instance by investigating the various instance, database, and tempdb settings and configurations. Make sure that you have not changed settings like compatibility levels or encryption before you run the first performance comparison, or accept the risk that some of the new features you enabled might affect some queries. To reduce migration risks, change the database compatibility level only after performance monitoring.
-- Implement [storage best practice guidelines for General Purpose](https://techcommunity.microsoft.com), such as pre-allocating the size of the files to get better performance.
-- Learn about the [key environment differences that might cause performance differences between a managed instance and SQL Server](https://azure.microsoft.com/blog/key-causes-of-performance-differences-between-sql-managed-instance-and-sql-server/), and identify the risks that might affect performance.
-- Make sure that you keep Query Store and automatic tuning enabled on your managed instance. These features enable you to measure workload performance and automatically fix potential performance issues. Learn how to use Query Store as an optimal tool for getting information about workload performance before and after a database compatibility level change, as explained in [Keep performance stability during the upgrade to a newer SQL Server version](/sql/relational-databases/performance/query-store-usage-scenarios#CEUpgrade).
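A quick way to verify the Query Store prerequisite is to inspect its state in each migrated database; for example (the database name is a placeholder):

```sql
-- Check the Query Store state in the current database.
SELECT actual_state_desc, desired_state_desc
FROM sys.database_query_store_options;

-- Enable it explicitly if it is not already on.
ALTER DATABASE [MyDatabase] SET QUERY_STORE = ON;
```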
-Once you have prepared an environment that is as comparable as possible to your on-premises environment, you can start running your workload and measuring performance. The measurement process should include the same parameters that you measured [while creating the performance baseline of your workload on the source SQL Server instance](#create-a-performance-baseline).
-As a result, you should compare the performance parameters with the baseline and identify critical differences.
-
-> [!NOTE]
-> In many cases, you will not be able to get exactly matching performance on the managed instance and SQL Server. Azure SQL Managed Instance is a SQL Server database engine, but infrastructure and high-availability configuration on a managed instance may introduce some differences. You might expect some queries to be faster while others are slower. The goal of the comparison is to verify that workload performance on the managed instance matches the performance on SQL Server (on average), and to identify any critical queries whose performance doesn't match your original performance.
-
-The outcome of the performance comparison might be:
-
-- Workload performance on the managed instance is aligned with or better than the workload performance on SQL Server. In this case, you have successfully confirmed that the migration is successful.
-- The majority of the performance parameters and queries in the workload work fine, with some exceptions showing degraded performance. In this case, you need to identify the differences and their importance. If there are important queries with degraded performance, investigate whether the underlying SQL plans have changed or whether the queries are hitting resource limits. You can mitigate this by applying hints on the critical queries (for example, a changed compatibility level or the legacy cardinality estimator) either directly or using plan guides, or by rebuilding or creating the statistics and indexes that affect the plans.
-- Most of the queries are slower on the managed instance compared to your source SQL Server instance. In this case, try to identify the root causes of the difference, such as [reaching a resource limit](resource-limits.md#service-tier-characteristics) like IO limits, the memory limit, or the instance log rate limit. If there are no resource limits that can cause the difference, try changing the compatibility level of the database or database settings like legacy cardinality estimation, and restart the test. Review the recommendations provided by the managed instance or Query Store views to identify the queries with regressed performance.
-
-> [!IMPORTANT]
-> Azure SQL Managed Instance has a built-in automatic plan correction feature that is enabled by default. This feature ensures that queries that worked fine in the past do not degrade in the future. Make sure that this feature is enabled and that you have run the workload long enough with the old settings before you apply new settings, in order to enable the managed instance to learn about the baseline performance and plans.
-
-Change the parameters or upgrade service tiers to converge to the optimal configuration until you get the workload performance that fits your needs.
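To confirm that automatic plan correction is active, and to review any plan regressions it has identified, you can query the tuning views (a sketch; these views are available in SQL Server 2017+ and on managed instances):

```sql
-- Verify that FORCE_LAST_GOOD_PLAN is enabled.
SELECT name, desired_state_desc, actual_state_desc
FROM sys.database_automatic_tuning_options;

-- Review tuning recommendations (detected plan regressions).
SELECT reason, score,
       JSON_VALUE(details, '$.implementationDetails.script') AS corrective_script
FROM sys.dm_db_tuning_recommendations;
```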
-
-### Monitor performance
-
-SQL Managed Instance provides advanced tools for monitoring and troubleshooting, and you should use them to monitor performance on your instance. Some of the parameters that you need to monitor are:
-
-- CPU usage on the instance, to determine whether the number of vCores that you provisioned is the right match for your workload.
-- Page life expectancy on your managed instance, to determine [whether you need additional memory](https://techcommunity.microsoft.com/t5/Azure-SQL-Database/Do-you-need-more-memory-on-Azure-SQL-Managed-Instance/ba-p/563444).
-- Statistics like `INSTANCE_LOG_GOVERNOR` or `PAGEIOLATCH` wait types, which tell you whether you have storage IO issues, especially on the General Purpose tier, where you might need to pre-allocate files to get better IO performance.
-
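Page life expectancy and IO-related waits can be sampled with standard DMVs, for example:

```sql
-- Page life expectancy (seconds) from the buffer manager counters.
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';

-- Cumulative IO-related waits since the instance started.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'PAGEIOLATCH%'
ORDER BY wait_time_ms DESC;
```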
-## Leverage advanced PaaS features
-
-Once you are on a fully managed platform and have verified that workload performance matches your SQL Server workload performance, take advantage of the features that are provided automatically as part of the service.
-
-Even if you don't make changes in the managed instance during the migration, there is a high chance that you will turn on some of the new features while operating your instance, to take advantage of the latest database engine improvements. Some changes are only enabled once the [database compatibility level has been changed](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database).
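When you are ready, changing the compatibility level is a single statement (150 corresponds to the SQL Server 2019 level; the database name is a placeholder):

```sql
-- Raise the compatibility level only after validating performance
-- against your baseline, as recommended earlier in this article.
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 150;
```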
-
-For instance, you don't have to create backups on a managed instance - the service performs backups for you automatically. You no longer need to worry about scheduling, taking, and managing backups. SQL Managed Instance gives you the ability to restore to any point in time within the retention period using [Point in Time Recovery (PITR)](../database/recovery-using-backups.md#point-in-time-restore). Additionally, you do not need to worry about setting up high availability, as [high availability](../database/high-availability-sla.md) is built in.
-
-To strengthen security, consider using [Azure Active Directory Authentication](../database/security-overview.md), [auditing](auditing-configure.md), [threat detection](../database/azure-defender-for-sql.md), [row-level security](/sql/relational-databases/security/row-level-security), and [dynamic data masking](/sql/relational-databases/security/dynamic-data-masking).
-
-In addition to advanced management and security features, a managed instance provides a set of advanced tools that can help you to [monitor and tune your workload](../database/monitor-tune-overview.md). [Azure SQL Analytics](../../azure-monitor/insights/azure-sql.md) enables you to monitor a large set of managed instances and centralize monitoring of a large number of instances and databases. [Automatic tuning](/sql/relational-databases/automatic-tuning/automatic-tuning#automatic-plan-correction) in managed instances continuously monitors performance of your SQL plan execution statistics and automatically fixes the identified performance issues.
-
-## Next steps
-
-- For information about Azure SQL Managed Instance, see [What is Azure SQL Managed Instance?](sql-managed-instance-paas-overview.md).
-- For a tutorial that includes a restore from backup, see [Create a managed instance](instance-create-quickstart.md).
-- For a tutorial showing migration using Database Migration Service, see [Migrate your on-premises database to Azure SQL Managed Instance using Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md).
azure-sql Quickstart Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/quickstart-content-reference-guide.md
However, in order to migrate your production database or even dev/test databases
- Performance testing - You should measure baseline performance metrics on your source SQL Server instance and compare them with the performance metrics on the destination SQL Managed Instance where you have migrated the database. Learn more about the [best practices for performance comparison](https://techcommunity.microsoft.com/t5/Azure-SQL-Database/The-best-practices-for-performance-comparison-between-Azure-SQL/ba-p/683210).
- Online migration - With the native `RESTORE` described in this article, you have to wait for the databases to be restored (and copied to Azure Blob storage if not already stored there). This causes some downtime for your application, especially for larger databases. To move your production database, use the [Database Migration Service (DMS)](../../dms/tutorial-sql-server-to-managed-instance.md?toc=%2fazure%2fsql-database%2ftoc.json) to migrate your database with minimal downtime. DMS accomplishes this by incrementally pushing the changes made in your source database to the SQL Managed Instance database being restored. This way, you can quickly switch your application from the source to the target database with minimal downtime.
-Learn more about the [recommended migration process](migrate-to-instance-from-sql-server.md).
+Learn more about the [recommended migration process](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
## Next steps
azure-sql Restore Sample Database Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/restore-sample-database-quickstart.md
In this quickstart, you'll use SQL Server Management Studio (SSMS) to restore a
> [!VIDEO https://www.youtube.com/embed/RxWYojo_Y3Q] > [!NOTE]
-> For more information on migration using Azure Database Migration Service, see [SQL Managed Instance migration using Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md).
-> For more information on various migration methods, see [SQL Server migration to Azure SQL Managed Instance](migrate-to-instance-from-sql-server.md).
+> For more information on migration using Azure Database Migration Service, see [Tutorial: Migrate SQL Server to an Azure Managed Instance using Database Migration Service](../../dms/tutorial-sql-server-to-managed-instance.md).
+> For more information on various migration methods, see [SQL Server to Azure SQL Managed Instance Guide](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
## Prerequisites
azure-sql Sql Managed Instance Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/sql-managed-instance-paas-overview.md
Azure SQL Managed Instance provides a set of advanced security features that can
- [Row-level security](/sql/relational-databases/security/row-level-security) (RLS) enables you to control access to rows in a database table based on the characteristics of the user executing a query (such as by group membership or execution context). RLS simplifies the design and coding of security in your application. RLS enables you to implement restrictions on data row access. For example, ensuring that workers can access only the data rows that are pertinent to their department, or restricting a data access to only the relevant data. - [Transparent data encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql) encrypts SQL Managed Instance data files, known as encrypting data at rest. TDE performs real-time I/O encryption and decryption of the data and log files. The encryption uses a database encryption key (DEK), which is stored in the database boot record for availability during recovery. You can protect all your databases in a managed instance with transparent data encryption. TDE is proven encryption-at-rest technology in SQL Server that is required by many compliance standards to protect against theft of storage media.
-Migration of an encrypted database to SQL Managed Instance is supported via Azure Database Migration Service or native restore. If you plan to migrate an encrypted database using native restore, migration of the existing TDE certificate from the SQL Server instance to SQL Managed Instance is a required step. For more information about migration options, see [SQL Server migration to SQL Managed Instance](migrate-to-instance-from-sql-server.md).
+Migration of an encrypted database to SQL Managed Instance is supported via Azure Database Migration Service or native restore. If you plan to migrate an encrypted database using native restore, migration of the existing TDE certificate from the SQL Server instance to SQL Managed Instance is a required step. For more information about migration options, see [SQL Server to Azure SQL Managed Instance Guide](../migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
## Azure Active Directory integration
azure-sql Sql Server To Managed Instance Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md
The test approach for database migration consists of the following activities:
Be sure to take advantage of the advanced cloud-based features offered by SQL Managed Instance, such as [built-in high availability](../../database/high-availability-sla.md), [threat detection](../../database/azure-defender-for-sql.md), and [monitoring and tuning your workload](../../database/monitor-tune-overview.md).
-[Azure SQL Analytics](../../../azure-monitor/insights/azure-sql.md) allows you to monitor a large set of managed instances in a centralized manner.
+[Azure SQL Analytics](../../../azure-sql/database/monitor-tune-overview.md) allows you to monitor a large set of managed instances in a centralized manner.
Some SQL Server features are only available once the [database compatibility level](/sql/relational-databases/databases/view-or-change-the-compatibility-level-of-a-database) is changed to the latest compatibility level (150).
communication-services Call Logs Azure Monitor Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-logs-azure-monitor-access.md
+
+ Title: Azure Communication Services - Enable and Access Call Summary and Call Diagnostic Logs
+
+description: How to access Call Summary and Call Diagnostic logs in Azure Monitor
+ Last updated : 07/22/2021
+# Enable and Access Call Summary and Call Diagnostic Logs
++
+To access telemetry for Azure Communication Services Voice & Video resources, follow these steps.
+
+## Enable logging
+1. First, you will need to create a storage account for your logs. See [Create a storage account](https://docs.microsoft.com/azure/storage/common/storage-account-create?tabs=azure-portal) for instructions to complete this step, and [Storage account overview](https://docs.microsoft.com/azure/storage/common/storage-account-overview) for more information on the types and features of different storage options. If you already have an Azure storage account, go to step 2.
+
+1. When you've created your storage account, next you need to enable logging by following the instructions in [Enable diagnostic logs in your resource](https://docs.microsoft.com/azure/communication-services/concepts/logging-and-diagnostics#enable-diagnostic-logs-in-your-resource). You will select the check boxes for the logs "CallSummaryPRIVATEPREVIEW" and "CallDiagnosticPRIVATEPREVIEW".
+
+1. Next, select the "Archive to a storage account" box and then select the storage account for your logs in the drop-down menu below. The "Send to Analytics workspace" option isn't currently available for Private Preview of this feature, but it will be made available when this feature is made public.
++++
+## Access Your Logs
+
+To access your logs, go to the storage account you designated in Step 3 above by navigating to [Storage Accounts](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Storage%2FStorageAccounts) in the Azure portal.
++
+From there, you can download all logs or individual logs.
++
+## Next Steps
+
+- Learn more about [Logging and Diagnostics](./logging-and-diagnostics.md)
communication-services Call Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/call-logs-azure-monitor.md
+
+ Title: Azure Communication Services - Call Summary and Call Diagnostic Logs
+
+description: Learn about Call Summary and Call Diagnostic Logs in Azure Monitor
+ Last updated : 07/22/2021
+# Call Summary and Call Diagnostic Logs
++
+## Data Concepts
+
+### Entities and IDs
+
+A *Call*, as it relates to the entities represented in the data, is an abstraction represented by the `correlationId`. `CorrelationId`s are unique per Call, and are time-bound by `callStartTime` and `callDuration`. Every Call is an event that contains data from two or more *Endpoints*, which represent the various human, bot, or server participants in the Call.
+
+A *Participant* (`participantId`) is present only when the Call is a *Group* Call, as it represents the connection between an Endpoint and the server.
+
+An *Endpoint* is the most unique entity, represented by `endpointId`. `EndpointType` tells you whether the Endpoint represents a human user (PSTN, VoIP), a Bot (Bot), or the server that is managing multiple Participants within a Call. When an `endpointType` is `"Server"`, the Endpoint will not be assigned a unique ID. By looking at `endpointType` and the number of `endpointId`s, you can always determine how many users and other non-human Participants (bots, servers) are on the Call. Native SDKs (like the Android calling SDK) reuse the same `endpointId` for a user across multiple Calls, thus enabling an understanding of experience across sessions. This differs from web-based Endpoints, which will always generate a new `endpointId` for each new Call.
+
+A *Stream* is the most granular entity, as there is one Stream per direction (inbound/outbound) and `mediaType` (e.g. audio, video).
+
+### P2P vs. Group Calls
+
+There are two types of Calls (represented by `callType`): P2P and Group.
+
+**P2P** calls are a connection between only two Endpoints, with no server Endpoint. P2P calls are initiated as a Call between those Endpoints and are not created as a group Call event prior to the connection.
++
+**Group** Calls include any Call that's created ahead of time as a meeting/calendar event and any Call that has more than 2 Endpoints connected. Group Calls will include a server Endpoint, and the connection between each Endpoint and the server constitutes a Participant. P2P Calls that add an additional Endpoint during the Call cease to be P2P, and they become a Group Call. By viewing the `participantStartTime` and `participantDuration`, the timeline of when each Endpoint joined the Call can be determined.
++
+## Log Structure
+Two types of logs are created: **Call Summary** logs and **Call Diagnostic** logs.
+
+Call Summary Logs contain basic information about the Call, including all the relevant IDs, timestamps, Endpoint and SDK information. For each Endpoint within a Call (not counting the Server), a distinct Call Summary Log will be created.
+
+Call Diagnostic Logs contain information about the Stream as well as a set of metrics that indicate quality of experience measurements. For each Endpoint within a Call (including the server), a distinct Call Diagnostic Log is created for each data stream (audio, video, etc.) between Endpoints. In a P2P Call, each log contains data relating to each of the outbound stream(s) associated with each Endpoint. In a Group Call, each stream associated with `endpointType` = `"Server"` will create a log containing data for the inbound streams, and all other streams will create logs containing data for the outbound streams for all non-server endpoints. In Group Calls, use the `participantId` as the key to join the related inbound/outbound logs into a distinct Participant connection.
+
+### Example 1: P2P Call
+
+The below diagram represents two endpoints connected directly in a P2P Call. In this example, 2 Call Summary Logs would be created (1 per `endpointId`) and 4 Call Diagnostic Logs would be created (1 per media stream). Each log will contain data relating to the outbound stream of the `endpointId`.
+++
+### Example 2: Group Call
+
+The below diagram represents a Group Call example with three `participantId`s, which means three `endpointId`s (`endpointId`s can potentially appear in multiple Participants, e.g. when rejoining a Call from the same device) and a Server Endpoint. For `participantId` 1, two Call Summary Logs would be created: one for the `endpointId`, and another for the server. Four Call Diagnostic Logs would be created relating to `participantId` 1, one for each media stream. The three logs with `endpointId` 1 would contain data relating to the outbound media streams, and the one log with `endpointId = null, endpointType = "Server"` would contain data relating to the inbound stream.
++
+## Data Definitions
+
+### Call Summary Log
+The Call Summary Log contains data to help you identify key properties of all Calls. A different Call Summary Log will be created for each `participantId` (`endpointId` in the case of P2P calls) in the Call.
+
+| Property | Description |
+|-||
+| time | The timestamp (UTC) of when the log was generated. |
+| operationName | The operation associated with the log record. |
+| operationVersion | The api-version associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
+| correlationIdentifier | The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationIdentifier` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationIdentifier` will be used to easily identify the Call you're troubleshooting. |
+| identifier | This is the unique ID for the user, matching the identity assigned by the Authentication service (MRI). |
+| callStartTime | A timestamp for the start of the call, based on the first attempted connection from any Endpoint. |
+| callDuration | The duration of the Call expressed in seconds, based on the first attempted connection and end of the last connection between two endpoints. |
+| callType | Will contain either `"P2P"` or `"Group"`. A `"P2P"` Call is a direct 1:1 connection between only two, non-server endpoints. A `"Group"` Call is a Call that has more than two endpoints or is created as a `"Group"` Call prior to the connection. |
+| teamsThreadId | This ID is only relevant when the Call is organized as a Microsoft Teams meeting, representing the Microsoft Teams - Azure Communication Services interoperability use-case. This ID is exposed in operational logs. You can also get this ID through the Chat APIs. |
+| participantId | This ID is generated to represent the two-way connection between a `"Participant"` Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
+| participantStartTime | Timestamp for beginning of the first connection attempt by the participant. |
+| participantDuration | The duration of each Participant connection in seconds, from `participantStartTime` to the timestamp when the connection is ended. |
+| participantEndReason | Contains Calling SDK error codes emitted by the SDK when relevant for each `participantId`. See Calling SDK error codes below. |
+| endpointId | Unique ID that represents each Endpoint connected to the call, where the Endpoint type is defined by `endpointType`. When the value is `null`, the connected entity is the Communication Services server (`endpointType`= `"Server"`). `EndpointId` can sometimes persist for the same user across multiple calls (`correlationIdentifier`) for native clients. The number of `endpointId`s will determine the number of Call Summary Logs. A distinct Summary Log is created for each `endpointId`. |
+| endpointType | This value describes the properties of each Endpoint connected to the Call. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
+| sdkVersion | Version string for the Communication Services Calling SDK version used by each relevant Endpoint. (Example: `"1.1.00.20212500"`) |
+| osVersion | String that represents the operating system and version of each Endpoint device. |
+
+### Call Diagnostic Log
+Call Diagnostic Logs provide important information about the Endpoints and the media transfers for each Participant, as well as measurements that help to understand quality issues.
+
+| Property | Description |
+||-|
+| operationName | The operation associated with log record. |
+| operationVersion | The `api-version` associated with the operation, if the `operationName` was performed using an API. If there is no API that corresponds to this operation, the version represents the version of that operation in case the properties associated with the operation change in the future. |
+| category | The log category of the event. Category is the granularity at which you can enable or disable logs on a particular resource. The properties that appear within the `properties` blob of an event are the same within a particular log category and resource type. |
+| correlationIdentifier | The `correlationIdentifier` identifies correlated events from all of the participants and endpoints that connect during a single Call. `correlationIdentifier` is the unique ID for a Call. If you ever need to open a support case with Microsoft, the `correlationID` will be used to easily identify the Call you're troubleshooting. |
+| participantId | This ID is generated to represent the two-way connection between a "Participant" Endpoint (`endpointType` = `"Server"`) and the server. When `callType` = `"P2P"`, there is a direct connection between two endpoints, and no `participantId` is generated. |
+| identifier | This ID represents the user identity, as defined by the Authentication service. Use this ID to correlate different events across calls and services. |
+| endpointId | Unique ID that represents each Endpoint connected to the call, with Endpoint type defined by `endpointType`. When the value is `null`, it means that the connected entity is the Communication Services server. `EndpointId` can persist for the same user across multiple calls (`correlationIdentifier`) for native clients but will be unique for every Call when the client is a web browser. |
+| endpointType | This value describes the properties of each `endpointId`. Can contain `"Server"`, `"VOIP"`, `"PSTN"`, `"BOT"`, or `"Unknown"`. |
+| mediaType | This string value describes the type of media being transmitted between endpoints within each stream. Possible values include `"Audio"`, `"Video"`, `"VBSS"` (Video-Based Screen Sharing), and `"AppSharing"`. |
+| streamId | Non-unique integer which, together with `mediaType`, can be used to uniquely identify streams of the same `participantId`. |
+| transportType | String value which describes the network transport protocol per `participantId`. Can contain `"UDP"`, `"TCP"`, or `"Unrecognized"`. `"Unrecognized"` indicates that the system could not determine if the `transportType` was TCP or UDP. |
+| roundTripTimeAvg | This is the average time it takes to get an IP packet from one Endpoint to another within a `participantDuration`. This network propagation delay is essentially tied to the physical distance between the two points and the speed of light, including additional overhead taken by the various routers in between. The latency is measured as one-way or Round-trip Time (RTT). Its value is expressed in milliseconds, and an RTT greater than 500 ms should be considered as negatively impacting the Call quality. |
+| roundTripTimeMax | The maximum RTT (ms) measured per media stream during a `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| jitterAvg | This is the average change in delay between successive packets. Azure Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering, which is approximately at `jitterAvg` >30 ms, that a negative quality impact is likely occurring. The packets arriving at different speeds cause a speaker's voice to sound robotic. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| jitterMax | This is the maximum jitter value measured between packets per media stream. Bursts in network conditions can cause issues in the audio/video traffic flow. |
+| packetLossRateAvg | This is the average percentage of packets that are lost. Packet loss directly affects audio quality, from small, individual lost packets that have almost no impact to back-to-back burst losses that cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media, resulting in missed syllables and words, and choppy video and sharing. A packet loss rate of greater than 10% (0.1) should be considered a rate that's likely having a negative quality impact. This is measured per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. |
+| packetLossRateMax | This value represents the maximum packet loss rate (%) per media stream over the `participantDuration` in a group Call or `callDuration` in a P2P Call. Bursts in network conditions can cause issues in the audio/video traffic flow. |
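The quality thresholds called out above (average RTT over 500 ms, average jitter over about 30 ms, average packet loss over 10%) can be applied as a quick triage filter when processing exported diagnostic logs. A minimal sketch, assuming the `properties` blob of a Call Diagnostic Log has already been parsed from JSON (field names as documented in this table; the thresholds are the guidance values above, not hard limits):

```python
# Guidance thresholds from the Call Diagnostic Log descriptions above.
RTT_MS_MAX = 500        # roundTripTimeAvg above this likely degrades quality
JITTER_MS_MAX = 30      # jitterAvg above the ~30 ms buffer likely degrades quality
LOSS_RATE_MAX = 0.1     # packetLossRateAvg above 10% likely degrades quality

def quality_flags(props: dict) -> list[str]:
    """Return a list of likely quality problems for one media stream.

    Metric values arrive as strings in the sample logs, so coerce with float().
    """
    flags = []
    if float(props.get("roundTripTimeAvg", 0)) > RTT_MS_MAX:
        flags.append("high latency")
    if float(props.get("jitterAvg", 0)) > JITTER_MS_MAX:
        flags.append("high jitter")
    if float(props.get("packetLossRateAvg", 0)) > LOSS_RATE_MAX:
        flags.append("high packet loss")
    return flags

stream = {"roundTripTimeAvg": "620", "jitterAvg": "2", "packetLossRateAvg": "0.15"}
print(quality_flags(stream))  # ['high latency', 'high packet loss']
```

A stream with no flags is not guaranteed to be problem-free; the maxima (`roundTripTimeMax`, `jitterMax`, `packetLossRateMax`) can still reveal short bursts that the averages smooth over.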
+
+### Error Codes
+The `participantEndReason` will contain a value from the set of Calling SDK error codes. You can refer to these codes to troubleshoot issues during the call, per Endpoint.
+
+| Error code | Description | Action to take |
+|--|--|--|
+| 0 | Success | Call (P2P) or Participant (Group) terminated correctly. |
+| 403 | Forbidden / Authentication failure. | Ensure that your Communication Services token is valid and not expired. If you are using Teams Interoperability, make sure your Teams tenant has been added to the preview access allowlist. To enable/disable Teams tenant interoperability, complete this form. |
+| 404 | Call not found. | Ensure that the number you're calling (or Call you're joining) exists. |
+| 408 | Call controller timed out. | Call Controller timed out waiting for protocol messages from user endpoints. Ensure clients are connected and available. |
+| 410 | Local media stack or media infrastructure error. | Ensure that you're using the latest SDK in a supported environment. |
+| 430 | Unable to deliver message to client application. | Ensure that the client application is running and available. |
+| 480 | Remote client Endpoint not registered. | Ensure that the remote Endpoint is available. |
+| 481 | Failed to handle incoming Call. | File a support request through the Azure portal. |
+| 487 | Call canceled, locally declined, ended due to an Endpoint mismatch issue, or failed to generate media offer. | Expected behavior. |
+| 490, 491, 496, 497, 498 | Local Endpoint network issues. | Check your network. |
+| 500, 503, 504 | Communication Services infrastructure error. | File a support request through the Azure portal. |
+| 603 | Call globally declined by remote Communication Services Participant. | Expected behavior. |
+| Unknown | Non-standard end reason (not part of the standard SIP codes). | |
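For automated log processing, the end-reason table above can be folded into a simple lookup. A sketch of that idea; the description strings here just summarize the table and are not emitted by the SDK:

```python
# participantEndReason -> short description, summarized from the table above.
END_REASONS = {
    0: "Success",
    403: "Forbidden / authentication failure",
    404: "Call not found",
    408: "Call controller timed out",
    410: "Local media stack or media infrastructure error",
    430: "Unable to deliver message to client application",
    480: "Remote client endpoint not registered",
    481: "Failed to handle incoming call",
    487: "Call canceled, declined, or endpoint mismatch",
    490: "Local endpoint network issues",
    491: "Local endpoint network issues",
    496: "Local endpoint network issues",
    498: "Local endpoint network issues",
    500: "Communication Services infrastructure error",
    503: "Communication Services infrastructure error",
    504: "Communication Services infrastructure error",
    603: "Call declined by remote participant",
}

def describe_end_reason(code: str) -> str:
    """Map a participantEndReason value (logged as a string) to a description."""
    try:
        return END_REASONS.get(int(code), "Non-standard end reason")
    except ValueError:
        # The log may contain a non-numeric value such as "Unknown".
        return "Non-standard end reason"

print(describe_end_reason("0"))        # Success
print(describe_end_reason("Unknown"))  # Non-standard end reason
```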
+
+## Call Examples and Sample Data
+
+### P2P Call
+Shared fields for all logs in the call:
+
+```
+"time": "2021-07-19T18:46:50.188Z",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-PROD-CCTS-TESTS/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"correlationId": "8d1a8374-344d-4502-b54b-ba2d6daaf0ae",
+```
+
+#### Call Summary Logs
+Call Summary Logs have shared operation and category information:
+
+```
+"operationName": "CallSummary",
+"operationVersion": "1.0",
+"category": "CallSummaryPRIVATEPREVIEW",
+
+```
+Call summary for VoIP user 1
+```
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "callStartTime": "2021-07-19T17:54:05.113Z",
+ "callDuration": 6,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantStartTime": "2021-07-19T17:54:06.758Z",
+ "participantDuration": "5",
+ "participantEndReason": "0",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.1.0",
+ "osVersion": "Windows 10.0.17763 Arch: x64"
+}
+```
+
+Call summary for VoIP user 2
+```
+"properties": {
+ "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
+ "callStartTime": "2021-07-19T17:54:05.335Z",
+ "callDuration": 6,
+ "callType": "P2P",
+ "teamsThreadId": "null",
+ "participantId": "null",
+ "participantStartTime": "2021-07-19T17:54:06.335Z",
+ "participantDuration": "5",
+ "participantEndReason": "0",
+ "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.1.0.0",
+ "osVersion": "null"
+}
+```
+#### Call Diagnostic Logs
+Call Diagnostic Logs share operation information:
+```
+"operationName": "CallDiagnostics",
+"operationVersion": "1.0",
+"category": "CallDiagnosticsPRIVATEPREVIEW",
+```
+Diagnostic log for audio stream from VoIP Endpoint 1 to VoIP Endpoint 2:
+```
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "participantId": "null",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "1000",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "82",
+ "roundTripTimeMax": "88",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from VoIP Endpoint 2 to VoIP Endpoint 1:
+```
+"properties": {
+ "identifier": "acs:7af14122-9ac7-4b81-80a8-4bf3582b42d0_06f9276d-8efe-4bdd-8c22-ebc5434903f0",
+ "participantId": "null",
+ "endpointId": "a5bd82f9-ac38-4f4a-a0fa-bb3467cdcc64",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "1363841599",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "78",
+ "roundTripTimeMax": "84",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for video stream from VoIP Endpoint 1 to VoIP Endpoint 2:
+```
+"properties": {
+ "identifier": "acs:61fddbe3-0003-4066-97bc-6aaf143bbb84_0000000b-4fee-66cf-ac00-343a0d003158",
+ "participantId": "null",
+ "endpointId": "570ea078-74e9-4430-9c67-464ba1fa5859",
+ "endpointType": "VoIP",
+ "mediaType": "Video",
+ "streamId": "2804",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "103",
+ "roundTripTimeMax": "143",
+ "jitterAvg": "0",
+ "jitterMax": "4",
+ "packetLossRateAvg": "3.146336E-05",
+ "packetLossRateMax": "0.001769911"
+}
+```
+### Group Call
+In the following example, there are three users in a Group Call, two connected via VoIP, and one connected via PSTN. All are using only Audio.
+The data would be generated in three Call Summary Logs and 6 Call Diagnostic Logs.
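These counts follow from the schema: one Call Summary Log per participant endpoint, and (for an audio-only group call) one Call Diagnostic Log per direction of the media stream between each participant and the server. A quick illustration, assuming that audio-only pattern; calls with video or screen sharing produce additional diagnostic logs per stream:

```python
def expected_log_counts(participants: int, streams_per_direction: int = 1) -> tuple[int, int]:
    """For an audio-only group call: one summary log per participant endpoint,
    and one diagnostic log per media stream per direction (participant <-> server)."""
    summary = participants
    diagnostics = participants * streams_per_direction * 2  # both directions
    return summary, diagnostics

# Three participants (2 VoIP + 1 PSTN), audio only:
print(expected_log_counts(3))  # (3, 6)
```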
+
+Shared fields for all logs in the Call:
+```
+"time": "2021-07-05T06:30:06.402Z",
+"resourceId": "SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/ACS-PROD-CCTS-TESTS/PROVIDERS/MICROSOFT.COMMUNICATION/COMMUNICATIONSERVICES/ACS-PROD-CCTS-TESTS",
+"correlationId": "341acde7-8aa5-445b-a3da-2ddadca47d22",
+```
+
+#### Call Summary Logs
+Call Summary Logs have shared operation and category information:
+```
+"operationName": "CallSummary",
+"operationVersion": "1.0",
+"category": "CallSummaryPRIVATEPREVIEW",
+```
+
+Call summary for VoIP Endpoint 1:
+```
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "participantStartTime": "2021-07-05T06:16:44.235Z",
+ "participantDuration": "82",
+ "participantEndReason": "0",
+ "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.0.3",
+ "osVersion": "Darwin Kernel Version 18.7.0: Mon Nov 9 15:07:15 PST 2020; root:xnu-4903.272.3~3/RELEASE_ARM64_S5L8960X"
+}
+```
+Call summary for VoIP Endpoint 3:
+```
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "participantStartTime": "2021-07-05T06:16:40.240Z",
+ "participantDuration": "87",
+ "participantEndReason": "0",
+ "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
+ "endpointType": "VoIP",
+ "sdkVersion": "1.0.0.3",
+ "osVersion": "Android 11.0; Manufacturer: Google; Product: redfin; Model: Pixel 5; Hardware: redfin"
+}
+```
+Call summary for PSTN Endpoint 2:
+```
+"properties": {
+ "identifier": "null",
+ "callStartTime": "2021-07-05T06:16:40.240Z",
+ "callDuration": 87,
+ "callType": "Group",
+ "teamsThreadId": "19:meeting_MjZiOTAyN2YtZWU1Yi00ZTZiLTk2ZDUtYTZlM2I2ZjgxOTkw@thread.v2",
+ "participantId": "515650f7-8204-4079-ac9d-d8f4bf07b04c",
+ "participantStartTime": "2021-07-05T06:17:10.447Z",
+ "participantDuration": "52",
+ "participantEndReason": "0",
+ "endpointId": "46387150-692a-47be-8c9d-1237efe6c48b",
+ "endpointType": "PSTN",
+ "sdkVersion": "null",
+ "osVersion": "null"
+}
+```
+#### Call Diagnostic Logs
+Call Diagnostic Logs share operation information:
+```
+"operationName": "CallDiagnostics",
+"operationVersion": "1.0",
+"category": "CallDiagnosticsPRIVATEPREVIEW",
+```
+Diagnostic log for audio stream from VoIP Endpoint 1 to Server Endpoint:
+```
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-729f-ac00-343a0d00d975",
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "endpointId": "5ebd55df-ffff-ffff-89e6-4f3f0453b1a6",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "14884",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "46",
+ "roundTripTimeMax": "48",
+ "jitterAvg": "0",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 1:
+```
+"properties": {
+ "identifier": null,
+ "participantId": "04cc26f5-a86d-481c-b9f9-7a40be4d6fba",
+ "endpointId": null,
+ "endpointType": "Server",
+ "mediaType": "Audio",
+ "streamId": "2001",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "42",
+ "roundTripTimeMax": "44",
+ "jitterAvg": "1",
+ "jitterMax": "1",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from VoIP Endpoint 3 to Server Endpoint:
+```
+"properties": {
+ "identifier": "acs:1797dbb3-f982-47b0-b98e-6a76084454f1_0000000b-1531-57c6-ac00-343a0d00d972",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "endpointId": "5ebd55df-ffff-ffff-ab89-19ff584890b7",
+ "endpointType": "VoIP",
+ "mediaType": "Audio",
+ "streamId": "13783",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "45",
+ "roundTripTimeMax": "46",
+ "jitterAvg": "1",
+ "jitterMax": "2",
+ "packetLossRateAvg": "0",
+ "packetLossRateMax": "0"
+}
+```
+Diagnostic log for audio stream from Server Endpoint to VoIP Endpoint 3:
+```
+"properties": {
+ "identifier": "null",
+ "participantId": "1a9cb3d1-7898-4063-b3d2-26c1630ecf03",
+ "endpointId": null,
+ "endpointType": "Server",
+ "mediaType": "Audio",
+ "streamId": "1000",
+ "transportType": "UDP",
+ "roundTripTimeAvg": "45",
+ "roundTripTimeMax": "46",
+ "jitterAvg": "1",
+ "jitterMax": "4",
+ "packetLossRateAvg": "0"
+}
+```
+
+## Next Steps
+
+- Learn more about [Logging and Diagnostics](./logging-and-diagnostics.md)
communication-services Certified Session Border Controllers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/certified-session-border-controllers.md
# List of Session Border Controllers certified for Azure Communication Services direct routing

This document contains a list of Session Border Controllers certified for Azure Communication Services direct routing. It also includes known limitations.
-Microsoft partners with selected Session Border Controllers (SBC) vendors to certify that their SBCs work with Communication Services direct routing.
+Microsoft is working with the selected Session Border Controller (SBC) vendors certified for Teams Direct Routing to certify their SBCs for Azure direct routing as well. You can watch the progress on this page. While an SBC certified for Teams Direct Routing can work with Azure direct routing, we encourage you not to put any workload on the SBC until it appears on this page. We also do not support uncertified SBCs. While Azure direct routing is built on the same backend as Teams Direct Routing, there are some differences. The certification covers comprehensive validation of the SBC for Azure direct routing.
+ Microsoft works with each vendor to:
+ - Jointly work on the SIP interconnection protocols.
+ - Perform intense tests using a third-party lab. Only devices that pass the tests are certified.
communication-services Sip Interface Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony-sms/sip-interface-infrastructure.md
Locations where only media processors are deployed (SIP flows via the closest da
## Media traffic: Codecs ### Leg between SBC and Cloud Media Processor or Microsoft Teams client.
-Applies to both media bypass case and non-bypass cases.
The Azure direct routing interface on the leg between the Session Border Controller and Cloud Media Processor can use the following codecs:
You can force use of the specific codec on the Session Border Controller by excl
### Leg between Communication Services Calling SDK app and Cloud Media Processor
-On the leg between the Cloud Media Processor and Communication Services Calling SDK app, either SILK or G.722 is used. The codec choice on this leg is based on Microsoft algorithms, which take into consideration multiple parameters.
+On the leg between the Cloud Media Processor and Communication Services Calling SDK app, G.722 is used. Microsoft is working on adding more codecs on this leg.
## Supported Session Border Controllers (SBCs)
data-factory Data Factory Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-service-identity.md
description: Learn about managed identity for Azure Data Factory.
Previously updated : 03/25/2021 Last updated : 07/19/2021
This article helps you understand what a managed identity is for Data Factory (f
## Overview
-When creating a data factory, a managed identity can be created along with factory creation. The managed identity is a managed application registered to Azure Active Directory, and represents this specific data factory.
+Managed identities in data factories eliminate the need for data engineers to manage credentials. Managed identities provide an identity for the Data Factory instance when connecting to resources that support Azure Active Directory (Azure AD) authentication. For example, Data Factory can use a managed identity to access resources like [Azure Key Vault](../key-vault/general/overview.md), where data admins can securely store credentials or access storage accounts. Data Factory uses the managed identity to obtain Azure AD tokens.
-Managed identity for Data Factory benefits the following features:
+There are two types of managed identities supported by Data Factory:
+
+- **System-assigned:** Data factory allows you to enable a managed identity directly on a service instance. When you enable a system-assigned managed identity during data factory creation, an identity is created in Azure AD that is tied to that service instance's lifecycle. By design, only that Azure resource can use this identity to request tokens from Azure AD. So when the resource is deleted, Azure automatically deletes the identity for you.
+- **User-assigned:** You may also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) and assign it to one or more instances of a data factory. In user-assigned managed identities, the identity is managed separately from the resources that use it.
+Managed identity for Data Factory provides the following benefits:
- [Store credential in Azure Key Vault](store-credentials-in-key-vault.md), in which case data factory managed identity is used for Azure Key Vault authentication.
- Access data stores or computes using managed identity authentication, including Azure Blob storage, Azure Data Explorer, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, REST, Databricks activity, Web activity, and more. Check the connector and activity articles for details.
+- User-assigned managed identity is also used to encrypt/decrypt data factory metadata using the customer-managed key stored in Azure Key Vault, providing double encryption.
+
+## System-assigned managed identity
-## Generate managed identity
+>[!NOTE]
+> System-assigned managed identity is also referred to as 'Managed identity' in the Data Factory documentation and UI for backward compatibility purposes. We will explicitly say 'User-assigned managed identity' when referring to it.
-Managed identity for Data Factory is generated as follows:
+#### <a name="generate-managed-identity"></a> Generate system-assigned managed identity
+
+System-assigned managed identity for Data Factory is generated as follows:
- When creating data factory through **Azure portal or PowerShell**, managed identity will always be created automatically. - When creating data factory through **SDK**, managed identity will be created only if you specify "Identity = new FactoryIdentity()" in the factory object for creation. See example in [.NET quickstart - create data factory](quickstart-create-data-factory-dot-net.md#create-a-data-factory).
Managed identity for Data Factory is generated as follows:
If you find your data factory doesn't have a managed identity associated after following the [retrieve managed identity](#retrieve-managed-identity) instructions, you can explicitly generate one by updating the data factory with the identity initiator programmatically:
-- [Generate managed identity using PowerShell](#generate-managed-identity-using-powershell)
-- [Generate managed identity using REST API](#generate-managed-identity-using-rest-api)
-- [Generate managed identity using an Azure Resource Manager template](#generate-managed-identity-using-an-azure-resource-manager-template)
-- [Generate managed identity using SDK](#generate-managed-identity-using-sdk)
+- [Generate managed identity using PowerShell](#generate-system-assigned-managed-identity-using-powershell)
+- [Generate managed identity using REST API](#generate-system-assigned-managed-identity-using-rest-api)
+- [Generate managed identity using an Azure Resource Manager template](#generate-system-assigned-managed-identity-using-an-azure-resource-manager-template)
+- [Generate managed identity using SDK](#generate-system-assigned-managed-identity-using-sdk)
>[!NOTE]
+>
>- Managed identity cannot be modified. Updating a data factory which already have a managed identity won't have any impact, the managed identity is kept unchanged. >- If you update a data factory which already have a managed identity without specifying "identity" parameter in the factory object or without specifying "identity" section in REST request body, you will get an error. >- When you delete a data factory, the associated managed identity will be deleted along.
-### Generate managed identity using PowerShell
+##### Generate system-assigned managed identity using PowerShell
Call **Set-AzDataFactoryV2** command, then you see "Identity" fields being newly generated:
Identity : Microsoft.Azure.Management.DataFactory.Models.FactoryIdentit
ProvisioningState : Succeeded ```
-### Generate managed identity using REST API
+##### Generate system-assigned managed identity using REST API
Call below API with "identity" section in the request body:
PATCH https://management.azure.com/subscriptions/<subsID>/resourceGroups/<resour
} ```
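The same PATCH can be issued from any HTTP client. A minimal sketch that only builds the request URL and body; the `api-version` value here is an assumption, and you would send the request with your own authenticated client (for example via `azure-identity` plus an HTTP library):

```python
import json

def build_enable_identity_request(sub_id: str, rg: str, factory: str,
                                  api_version: str = "2018-06-01") -> tuple[str, str]:
    """Build the PATCH URL and JSON body that enable a system-assigned identity
    on a data factory. api_version is an assumed default; adjust as needed."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{sub_id}/resourceGroups/{rg}"
        f"/providers/Microsoft.DataFactory/factories/{factory}"
        f"?api-version={api_version}"
    )
    body = json.dumps({"identity": {"type": "SystemAssigned"}})
    return url, body

url, body = build_enable_identity_request("<subsID>", "<resourceGroupName>", "<dataFactoryName>")
print(body)  # {"identity": {"type": "SystemAssigned"}}
```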
-### Generate managed identity using an Azure Resource Manager template
+##### Generate system-assigned managed identity using an Azure Resource Manager template
**Template**: add "identity": { "type": "SystemAssigned" }.
PATCH https://management.azure.com/subscriptions/<subsID>/resourceGroups/<resour
} ```
-### Generate managed identity using SDK
+##### Generate system-assigned managed identity using SDK
Call the data factory create_or_update function with Identity=new FactoryIdentity(). Sample code using .NET:
Factory dataFactory = new Factory
client.Factories.CreateOrUpdate(resourceGroup, dataFactoryName, dataFactory); ```
-## Retrieve managed identity
+#### <a name="retrieve-managed-identity"></a> Retrieve system-assigned managed identity
You can retrieve the managed identity from Azure portal or programmatically. The following sections show some samples.

>[!TIP]
> If you don't see the managed identity, [generate managed identity](#generate-managed-identity) by updating your factory.
-### Retrieve managed identity using Azure portal
+#### Retrieve system-assigned managed identity using Azure portal
You can find the managed identity information from Azure portal -> your data factory -> Properties.
The managed identity information will also show up when you create linked servic
When granting permission, in Azure resource's Access Control (IAM) tab -> Add role assignment -> Assign access to -> select Data Factory under System assigned managed identity -> select by factory name; or in general, you can use object ID or data factory name (as managed identity name) to find this identity. If you need to get managed identity's application ID, you can use PowerShell.
-### Retrieve managed identity using PowerShell
+#### Retrieve system-assigned managed identity using PowerShell
The managed identity principal ID and tenant ID will be returned when you get a specific data factory as follows. Use the **PrincipalId** to grant access:
Id : 765ad4ab-XXXX-XXXX-XXXX-51ed985819dc
Type : ServicePrincipal ```
-### Retrieve managed identity using REST API
+#### Retrieve system-assigned managed identity using REST API
The managed identity principal ID and tenant ID will be returned when you get a specific data factory as follows.
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
} ```
+## User-assigned managed identity
+
+You can create, delete, and manage user-assigned managed identities in Azure Active Directory. For more details, see the [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md) documentation.
+
+### Credentials
+
+We are introducing Credentials, which can contain user-assigned managed identities and service principals, and which also list the system-assigned managed identity that you can use in linked services that support Azure Active Directory (AAD) authentication. Credentials help you consolidate and manage all your AAD-based credentials.
+
+Below are the generic steps for using a **user-assigned managed identity** in the linked services for authentication.
+
+1. Associate a user-assigned managed identity with the data factory instance using the Azure portal, SDK, PowerShell, or REST API.
+ The screenshot below uses the Azure portal (data factory blade) to associate the user-assigned managed identity.
+
 :::image type="content" source="media/managed-identities/uami-azure-portal.jpg" alt-text="Screenshot showing how to use Azure portal to associate a user-assigned managed identity." lightbox="media/managed-identities/uami-azure-portal.jpg":::
+
+2. Create a 'Credential' interactively in the data factory user interface. You can select the user-assigned managed identity associated with the data factory in Step 1.
+
+ :::image type="content" source="media/managed-identities/credential-adf-ui-create-new-1.png" alt-text="Screenshot showing the first step of creating new credentials." lightbox="media/managed-identities/credential-adf-ui-create-new-1.png":::
+
+ :::image type="content" source="media/managed-identities/credential-adf-ui-create-new-2a.png" alt-text="Screenshot showing the second step of creating new credentials." lightbox="media/managed-identities/credential-adf-ui-create-new-2a.png":::
+
+3. Create a new linked service and select 'user-assigned managed identity' under authentication.
+
+ :::image type="content" source="media/managed-identities/credential-adf-ui-create-new-linked-service.png" alt-text="Screenshot showing the new linked service with user-assigned managed identity authentication." lightbox="media/managed-identities/credential-adf-ui-create-new-linked-service.png":::
+
+> [!NOTE]
+> You can use the SDK, PowerShell, or REST APIs for the above actions.
## Next steps

See the following topics that introduce when and how to use data factory managed identity:

- [Store credential in Azure Key Vault](store-credentials-in-key-vault.md)
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Returns the largest integer not greater than the number.
___ ### <code>fromBase64</code> <code><b>fromBase64(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
-Encodes the given string in base64.
+Decodes the given base64-encoded string.
* ``fromBase64('Z3VuY2h1cw==') -> 'gunchus'`` ___ ### <code>fromUTC</code>
ddos-protection Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/telemetry.md
In this tutorial, you'll learn how to:
### Metrics
+The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
+
+- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.
+- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP, that is, traffic that was not filtered.
+- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system, representing the sum of the packets dropped and forwarded.
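This naming convention can be parsed mechanically when processing exported metrics. A small sketch; the metric names follow the examples in the list above, and any other names should be treated as assumptions:

```python
def classify_ddos_metric(name: str) -> str:
    """Classify a DDoS metric name as dropped, forwarded, or total traffic,
    based on the tag-name convention described above."""
    if "Dropped" in name:
        return "dropped by the DDoS protection system"
    if "Forwarded" in name:
        return "forwarded to the destination VIP"
    return "total traffic into the scrubbing system"

print(classify_ddos_metric("Inbound Packets Dropped DDoS"))
print(classify_ddos_metric("Inbound Packets Forwarded DDoS"))
print(classify_ddos_metric("Inbound Packets DDoS"))
```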
> [!NOTE]
> While multiple options for **Aggregation** are displayed on Azure portal, only the aggregation types listed in the table below are supported for each metric. We apologize for this confusion and we are working to resolve it.
The following [metrics](../azure-monitor/essentials/metrics-supported.md#microso
## View DDoS protection telemetry
-Telemetry for an attack is provided through Azure Monitor in real time. Telemetry is available only when a public IP address has been under mitigation.
+Telemetry for an attack is provided through Azure Monitor in real time. While [mitigation triggers](#view-ddos-mitigation-policies) for TCP SYN, TCP & UDP are available during peace-time, other telemetry is available only when a public IP address has been under mitigation.
+
+You can view DDoS telemetry for a protected public IP address through three different resource types: DDoS protection plan, virtual network, and public IP address.
+
+### DDoS protection plan
+1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS protection plan.
+2. Under **Monitoring**, select **Metrics**.
+3. Select **Scope**. Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for, and then select **Apply**.
+4. Select the **Aggregation** type as **Max**.
-1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your DDoS Protection Plan.
+### Virtual network
+1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your virtual network that has DDoS protection enabled.
2. Under **Monitoring**, select **Metrics**.
3. Select **Scope**. Select the **Subscription** that contains the public IP address you want to log, select **Public IP Address** for **Resource type**, then select the specific public IP address you want to log metrics for, and then select **Apply**.
4. Select the **Aggregation** type as **Max**.
+5. Select **Add filter**. Under **Property**, select **Protected IP Address**, and the operator should be set to **=**. Under **Values**, you will see a dropdown of public IP addresses, associated with the virtual network, that are protected by DDoS protection.
-The metric names present different packet types, and bytes vs. packets, with a basic construct of tag names on each metric as follows:
+![DDoS Diagnostic Settings](./media/ddos-attack-telemetry/vnet-ddos-metrics.png)
-- **Dropped tag name** (for example, **Inbound Packets Dropped DDoS**): The number of packets dropped/scrubbed by the DDoS protection system.-- **Forwarded tag name** (for example **Inbound Packets Forwarded DDoS**): The number of packets forwarded by the DDoS system to the destination VIP ΓÇô traffic that was not filtered.-- **No tag name** (for example **Inbound Packets DDoS**): The total number of packets that came into the scrubbing system ΓÇô representing the sum of the packets dropped and forwarded.
+### Public IP address
+1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to your public IP address.
+2. Under **Monitoring**, select **Metrics**.
+3. Select the **Aggregation** type as **Max**.
## View DDoS mitigation policies
-DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
+DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP & UDP) for each public IP address of the protected resource, in the virtual network that has DDoS protection enabled. You can view the policy thresholds by selecting the **Inbound TCP packets to trigger DDoS mitigation** and **Inbound UDP packets to trigger DDoS mitigation** metrics with **aggregation** type as 'Max', as shown in the following picture:
![View mitigation policies](./media/manage-ddos-protection/view-mitigation-policies.png)
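The trigger behavior these metrics describe can be sketched as follows; the threshold value here is hypothetical, since actual thresholds are auto-tuned per public IP address:

```python
# Hypothetical auto-tuned policy threshold, in packets per second.
TCP_SYN_TRIGGER_PPS = 50_000

def mitigation_triggered(observed_pps: int, threshold_pps: int = TCP_SYN_TRIGGER_PPS) -> bool:
    """Mitigation starts only when inbound traffic breaches the policy threshold."""
    return observed_pps > threshold_pps

print(mitigation_triggered(80_000))  # True: traffic exceeds the threshold
print(mitigation_triggered(10_000))  # False: normal traffic, no mitigation
```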
-Policy thresholds are auto-configured via Azure machine learning-based network traffic profiling. Only when the policy threshold is breached does DDoS mitigation occur for the IP address under attack.
- ## Validate and test To simulate a DDoS attack to validate DDoS protection telemetry, see [Validate DDoS detection](test-through-simulations.md).
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
Title: Set up your network description: Learn about solution architecture, network preparation, prerequisites, and other information needed to ensure that you successfully set up your network to work with Azure Defender for IoT appliances. Previously updated : 02/18/2021 Last updated : 07/25/2021
Record site information such as:
- DNS servers (optional). Prepare your DNS server's IP and host name.
-For a detailed list and description of important site information, see [Example site book](#example-site-book).
+For a detailed list and description of important site information, see [Predeployment checklist](#predeployment-checklist).
#### Successful monitoring guidelines
Use these sections for troubleshooting issues:
For any other issues, contact [Microsoft Support](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099).
-## Example site book
+## Predeployment checklist
-Use the example site book to retrieve and review important information that you need for network setup.
+Use the predeployment checklist to retrieve and review important information that you need for network setup.
### Site checklist
Review this list before site deployment:
| 4 | Provide a list of switch models in the network. | ☐ | |
| 5 | Provide a list of vendors and protocols of the industrial equipment. | ☐ | |
| 6 | Provide network details for sensors (IP address, subnet, D-GW, DNS). | ☐ | |
-| 7 | Create necessary firewall rules and the access list. | ΓÿÉ | |
-| 8 | Create spanning ports on switches for port monitoring, or configure network taps as desired. | ΓÿÉ | |
-| 9 | Prepare rack space for sensor appliances. | ΓÿÉ | |
-| 10 | Prepare a workstation for personnel. | ΓÿÉ | |
-| 11 | Provide a keyboard, monitor, and mouse for the Defender for IoT rack devices. | ΓÿÉ | |
-| 12 | Rack and cable the appliances. | ΓÿÉ | |
-| 13 | Allocate site resources to support deployment. | ΓÿÉ | |
-| 14 | Create Active Directory groups or local users. | ΓÿÉ | |
-| 15 | Set-up training (self-learning). | ΓÿÉ | |
-| 16 | Go or no-go. | ΓÿÉ | |
-| 17 | Schedule the deployment date. | ΓÿÉ | |
+| 7 | Third-party switch management. | ☐ | |
+| 8 | Create necessary firewall rules and the access list. | ☐ | |
+| 9 | Create spanning ports on switches for port monitoring, or configure network taps as desired. | ☐ | |
+| 10 | Prepare rack space for sensor appliances. | ☐ | |
+| 11 | Prepare a workstation for personnel. | ☐ | |
+| 12 | Provide a keyboard, monitor, and mouse for the Defender for IoT rack devices. | ☐ | |
+| 13 | Rack and cable the appliances. | ☐ | |
+| 14 | Allocate site resources to support deployment. | ☐ | |
+| 15 | Create Active Directory groups or local users. | ☐ | |
+| 16 | Set up training (self-learning). | ☐ | |
+| 17 | Go or no-go. | ☐ | |
+| 18 | Schedule the deployment date. | ☐ | |
| **Date** | **Note** | **Deployment date** | **Note** |
Review this list before site deployment:
An overview of the industrial network diagram will allow you to define the proper location for the Defender for IoT equipment.
-1. View a global network diagram of the industrial OT environment. For example:
+1. **Global network diagram** - View a global network diagram of the industrial OT environment. For example:
- :::image type="content" source="media/how-to-set-up-your-network/ot-global-network-diagram.png" alt-text="Diagram of the industrial OT environment for the global network.":::
+ :::image type="content" source="media/how-to-set-up-your-network/backbone-switch.png" alt-text="Diagram of the industrial OT environment for the global network.":::
> [!NOTE] > The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-2. Provide the approximate number of network devices that will be monitored. You will need this information when onboarding your subscription to the Azure Defender for IoT portal. During the onboarding process, you will be prompted to enter the number of devices in increments of 1000.
+1. **Committed devices** - Provide the approximate number of network devices that will be monitored. You will need this information when onboarding your subscription to the Azure Defender for IoT portal. During the onboarding process, you will be prompted to enter the number of devices in increments of 1000.
-3. Provide a subnet list for the production networks and a description (optional).
+1. **(Optional) Subnet list** - Provide a subnet list for the production networks and a description.
| **#** | **Subnet name** | **Description** |
|--|--|--|
An overview of the industrial network diagram will allow you to define the prope
| 3 | | |
| 4 | | |
-4. Provide a VLAN list of the production networks.
+1. **VLANs** - Provide a VLAN list of the production networks.
| **#** | **VLAN Name** | **Description** |
|--|--|--|
An overview of the industrial network diagram will allow you to define the prope
| 3 | | |
| 4 | | |
-5. To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to:
+1. **Switch models and mirroring support** - To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to:
| **#** | **Switch** | **Model** | **Traffic mirroring support (SPAN, RSPAN, or none)** |
|--|--|--|--|
An overview of the industrial network diagram will allow you to define the prope
| 3 | | | |
| 4 | | | |
- Does a third party manage the switches? Y or N
+1. **Third-party switch management** - Does a third party manage the switches? Y or N
If yes, who? __________________________________
An overview of the industrial network diagram will allow you to define the prope
- Emerson – DeltaV, Ovation
-6. Are there devices that communicate via a serial connection in the network? Yes or No
+1. **Serial connection** - Are there devices that communicate via a serial connection in the network? Yes or No
If yes, specify which serial communication protocol: ________________ If yes, mark on the network diagram what devices communicate with serial protocols, and where they are:
- <Add your network diagram with marked serial connection>
+ *Add your network diagram with marked serial connection*
-7. For Quality of Service (QoS), the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
+1. **Quality of Service** - For Quality of Service (QoS), the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
Business unit (BU): ________________
-### Specifications for site equipment
+1. **Sensor** - Specifications for site equipment
-#### Network
-
-The sensor appliance is connected to switch SPAN port through a network adapter. It's connected to the customer's corporate network for management through another dedicated network adapter.
-
-Provide address details for the sensor NIC that will be connected in the corporate network:
-
-| Item | Appliance 1 | Appliance 2 | Appliance 3 |
-| | - | - | - |
-| Appliance IP address | | | |
-| Subnet | | | |
-| Default gateway | | | |
-| DNS | | | |
-| Host name | | | |
-
-#### iDRAC/iLO/Server management
-
-| Item | Appliance 1 | Appliance 2 | Appliance 3 |
-| | - | - | - |
-| Appliance IP address | | | |
-| Subnet | | | |
-| Default gateway | | | |
-| DNS | | | |
-
-#### On-premises management console
-
-| Item | Active | Passive (when using HA) |
-| | | -- |
-| IP address | | |
-| Subnet | | |
-| Default gateway | | |
-| DNS | | |
-
-#### SNMP
-
-| Item | Details |
-| | |
-| IP | |
-| IP address | |
-| Username | |
-| Password | |
-| Authentication type | MD5 or SHA |
-| Encryption | DES or AES |
-| Secret key | |
-| SNMP v2 community string |
-
-### On-premises management console SSL certificate
-
-Are you planning to use an SSL certificate? Yes or No
-
-If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
-
-### SMTP authentication
-
-Are you planning to use SMTP to forward alerts to an email server? Yes or No
+    The sensor appliance is connected to a switch SPAN port through a network adapter. It's connected to the customer's corporate network for management through another dedicated network adapter.
+
+ Provide address details for the sensor NIC that will be connected in the corporate network:
+
+ | Item | Appliance 1 | Appliance 2 | Appliance 3 |
+ |--|--|--|--|
+ | Appliance IP address | | | |
+ | Subnet | | | |
+ | Default gateway | | | |
+ | DNS | | | |
+ | Host name | | | |
-If yes, what authentication method you will use?
+1. **iDRAC/iLO/Server management**
-### Active Directory or local users
+ | Item | Appliance 1 | Appliance 2 | Appliance 3 |
+ |--|--|--|--|
+ | Appliance IP address | | | |
+ | Subnet | | | |
+ | Default gateway | | | |
+ | DNS | | | |
-Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
+1. **On-premises management console**
-### IoT device types in the network
+ | Item | Active | Passive (when using HA) |
+ |--|--|--|
+ | IP address | | |
+ | Subnet | | |
+ | Default gateway | | |
+ | DNS | | |
+
+1. **SNMP**
+
+ | Item | Details |
+ |--|--|
+ | IP | |
+ | IP address | |
+ | Username | |
+ | Password | |
+ | Authentication type | MD5 or SHA |
+ | Encryption | DES or AES |
+ | Secret key | |
+    | SNMP v2 community string | |
+
+1. **On-premises management console SSL certificate**
+
+ Are you planning to use an SSL certificate? Yes or No
+
+ If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
-| Device type | Number of devices in the network | Average bandwidth |
-| | | -- |
-| Camera | |
-| X-ray machine | |
+1. **SMTP authentication**
-## See also
+ Are you planning to use SMTP to forward alerts to an email server? Yes or No
+
+    If yes, what authentication method will you use?
+
+1. **Active Directory or local users**
+
+ Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
+
+1. **IoT device types in the network**
+
+    | Device type | Number of devices in the network | Average bandwidth |
+    |--|--|--|
+    | Camera | | |
+    | X-ray machine | | |
+    | | | |
+    | | | |
+    | | | |
+    | | | |
+    | | | |
+    | | | |
+    | | | |
+    | | | |
+
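The committed devices item in the checklist above is entered in increments of 1000; a sketch of that sizing arithmetic, assuming the count is rounded up to the next increment (an assumption, not stated in the article):

```python
import math

def committed_devices(device_count: int) -> int:
    """Round a device count up to the next increment of 1000 (assumed behavior;
    the onboarding flow prompts for the number of devices in increments of 1000)."""
    return max(1000, math.ceil(device_count / 1000) * 1000)

print(committed_devices(2_350))  # 3000
```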
+## Next steps
[About the Defender for IoT installation](how-to-install-software.md)
defender-for-iot References Horizon Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/references-horizon-sdk.md
Develop dissector plugins without:
- violating compliance regulations.
+Contact <ms-horizon-support@microsoft.com> for information about developing protocol plugins.
## Customization and localization The SDK supports various customization options, including:
Defender for IoT provides basic dissectors for common protocols. You can build y
This kit contains the header files needed for development. The development process requires basic steps and optional advanced steps, described in this SDK.
-Contact ms-horizon-support@microsoft.com for information on receiving header files and other resources.
+Contact <ms-horizon-support@microsoft.com> for information on receiving header files and other resources.
## About the environment and setup
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
Use the Horizon SDK to design dissector plugins that decode network traffic so i
Protocol dissectors are developed as external plugins and are integrated with an extensive range of Defender for IoT services, for example services that provide monitoring, alerting, and reporting capabilities.
-For information on working with the SDK, contact support.microsoft.com.
+Contact <ms-horizon-support@microsoft.com> for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
Once the plugin is developed, you can use Horizon web console to:
To upload:
2. Drag or browse to your plugin. If the upload fails, an error message will be presented.
-Contact your support at [support.microsoft.com](https://support.microsoft.com/en-us/supportforbusiness/productselection?sapId=82c88f35-1b8e-f274-ec11-c6efdd6dd099) for details about working with the Open Development Environment (ODE) SDK.
+Contact <ms-horizon-support@microsoft.com> for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins.
## Enable and disable plugins
defender-for-iot Resources Sensor Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/organizations/resources-sensor-deployment-checklist.md
- Title: Azure Defender for IoT pre-deployment checklist
-description: This article provides information, and a checklist that should be used prior to deployment when preparing your site.
Previously updated : 07/18/2021---
-# Pre-deployment checklist overview
-
-This article provides information, and a checklist that should be used prior to deployment when preparing your site to ensure a successful onboarding.
--- The Defender for IoT physical sensor should connect to managed switches that see the industrial communications between layers 1 and 2 (in some cases also layer 3).-- The sensor listens on a switch Mirror port (SPAN port) or a TAP.-- The management port is connected to the business/corporate network using SSL.-
-## Checklist
-
-Having an overview of an industrial network diagram, will allow the site engineers to define the proper location for Azure Defender for IoT equipment.
-
-### 1. Global network diagram
-
-The global network diagram provides a diagram of the industrial OT environment
---
-> [!Note]
-> The Defender for IoT appliance should be connected to a lower-level switch that sees the traffic between the ports on the switch.
-
-### 2. Committed devices
-
-Provide the approximate number of network devices that will be monitored. You will need this information when onboarding your subscription to the Azure Defender for IoT portal. During the onboarding process, you will be prompted to enter the number of devices in increments of 1000.
-
-### 3. (Optional) Subnet list
-
-Provide a subnet list of the production networks.
-
-| **#** | **Subnet name** | **Description** |
-|--|--|--|
-| 1 | | |
-| 2 | | |
-| 3 | | |
-| 4 | | |
-
-### 4. VLANs
-
-Provide a VLAN list of the production networks.
-
-| **#** | **VLAN Name** | **Description** |
-|--|--|--|
-| 1 | | |
-| 2 | | |
-| 3 | | |
-| 4 | | |
-
-### 5. Switch models and mirroring support
-
-To verify that the switches have port mirroring capability, provide the switch model numbers that the Defender for IoT platform should connect to.
-
-| **#** | **Switch** | **Model** | **Traffic mirroring support (SPAN, RSPAN, or none)** |
-|--|--|--|--|
-| 1 | | |
-| 2 | | |
-| 3 | | |
-| 4 | | |
-
-### 6. Third-party switch management
-
-Does a third party manage the switches? Y or N
-
-If yes, who? __________________________________
-
-What is their policy? __________________________________
-
-### 7. Serial connection
-
-Are there devices that communicate via a serial connection in the network? Yes or No
-
-If yes, specify which serial communication protocol: ________________
-
-If yes, indicate on the network diagram what devices communicate with serial protocols, and where they are.
-
-*Add your network diagram with marked serial connections.*
-
-### 8. Vendors and protocols (industrial equipment)
-
-Provide a list of vendors and protocols of the industrial equipment. (Optional)
-
-| **#** | **Vendor** | **Communication protocol** |
-|--|--|--|
-| 1 | | |
-| 2 | | |
-| 3 | | |
-| 4 | | |
-
-For example:
--- Siemens--- Rockwell automation ΓÇô Ethernet or IP--- Emerson ΓÇô DeltaV, Ovation-
-### 9. QoS
-
-For QoS, the default setting of the sensor is 1.5 Mbps. Specify if you want to change it: ________________
-
- Business unit (BU): ________________
-
-### 10. Sensor
-
-The sensor appliance is connected to switch SPAN port through a network adapter. It's connected to the customer's corporate network for management through another dedicated network adapter.
-
-Provide address details for the sensor NIC that will be connected in the corporate network:
-
-| Item | Appliance 1 | Appliance 2 | Appliance 3 |
-|--|--|--|--|
-| Appliance IP address | | | |
-| Subnet | | | |
-| Default gateway | | | |
-| DNS | | | |
-| Host name | | | |
-
-### 11. iDRAC/iLO/Server management
-
-| Item | Appliance 1 | Appliance 2 | Appliance 3 |
-|--|--|--|--|
-| Appliance IP address | | | |
-| Subnet | | | |
-| Default gateway | | | |
-| DNS | | | |
-
-### 12. On-premises management console
-
-| Item | Active | Passive (when using HA) |
-|--|--|--|
-| IP address | | |
-| Subnet | | |
-| Default gateway | | |
-| DNS | | |
-
-### 13. SNMP
-
-| Item | Details |
-|--|--|
-| IP | |
-| IP address | |
-| Username | |
-| Password | |
-| Authentication type | MD5 or SHA |
-| Encryption | DES or AES |
-| Secret key | |
-| SNMP v2 community string |
-
-### 14. SSL certificate
-
-Are you planning to use an SSL certificate? Yes or No
-
-If yes, what service will you use to generate it? What attributes will you include in the certificate (for example, domain or IP address)?
-
-### 15. SMTP authentication
-
-Are you planning to use SMTP to forward alerts to an email server? Yes or No
-
-If yes, what authentication method you will use?
-
-### 16. Active Directory or local users
-
-Contact an Active Directory administrator to create an Active Directory site user group or create local users. Be sure to have your users ready for the deployment day.
-
-### 17. IoT device types in the network
-
-| Device type | Number of devices in the network | Average bandwidth |
-|--|--|--|
-| Ex. Camera | |
-| EX. X-ray machine | |
-| | |
-| | |
-| | |
-| | |
-| | |
-| | |
-| | |
-| | |
-
-## Next steps
-
-[About Azure Defender for IoT network setup](how-to-set-up-your-network.md)
-
-[About the Defender for IoT installation](how-to-install-software.md)
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-managed-instance-online.md
Last updated 08/04/2020
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) with minimal downtime. For additional methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/managed-instance/migrate-to-instance-from-sql-server.md).
+You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md) with minimal downtime. For additional methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
In this tutorial, you migrate the **Adventureworks2012** database from an on-premises instance of SQL Server to a SQL Managed Instance with minimal downtime by using Azure Database Migration Service.
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/dms/tutorial-sql-server-to-managed-instance.md
Last updated 01/08/2020
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md). For additional methods that may require some manual effort, see the article [SQL Server instance migration to SQL Managed Instance](../azure-sql/managed-instance/migrate-to-instance-from-sql-server.md).
+You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](../azure-sql/managed-instance/sql-managed-instance-paas-overview.md). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide.md).
In this tutorial, you migrate the **Adventureworks2012** database from an on-premises instance of SQL Server to a SQL Managed Instance by using Azure Database Migration Service.
iot-hub Iot Hub Live Data Visualization In Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md
arduino Previously updated : 6/08/2020 Last updated : 7/23/2021
-# Visualize real-time sensor data from Azure IoT Hub using Power BI
+# Tutorial: Visualize real-time sensor data from Azure IoT Hub using Power BI
-![End-to-end diagram](./media/iot-hub-live-data-visualization-in-power-bi/end-to-end-diagram.png)
+You can use Microsoft Power BI to visualize real-time sensor data that your Azure IoT hub receives. To do so, you configure an Azure Stream Analytics job to consume the data from IoT Hub and route it to a dataset in Power BI.
-In this article, you learn how to visualize real-time sensor data that your Azure IoT hub receives by using Power BI. If you want to try to visualize the data in your IoT hub with a web app, see [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
+[Microsoft Power BI](https://powerbi.microsoft.com/) is a data visualization tool that you can use to perform self-service and enterprise business intelligence (BI) over large data sets. [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) is a fully managed, real-time analytics service designed to help you analyze and process fast-moving streams of data that can be used to get insights, build reports, or trigger alerts and actions.
+
+In this tutorial, you perform the following tasks:
+
+> [!div class="checklist"]
+> * Create a consumer group on your IoT hub.
+> * Create and configure an Azure Stream Analytics job to read temperature telemetry from your consumer group and send it to Power BI.
+> * Create a report of the temperature data in Power BI and share it to the web.
## Prerequisites
-* Complete the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) tutorial or one of the device tutorials. For example, you can go to [Raspberry Pi with node.js](iot-hub-raspberry-pi-kit-node-get-started.md) or to one of the [Send telemetry](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) quickstarts. These articles cover the following requirements:
+* Complete one of the [Send telemetry](quickstart-send-telemetry-dotnet.md) quickstarts in the development language of your choice. Alternatively, you can use any device app that sends temperature telemetry; for example, the [Raspberry Pi online simulator](iot-hub-raspberry-pi-web-simulator-get-started.md) or one of the [Embedded device](/azure/iot-develop/quickstart-devkit-mxchip-az3166) quickstarts. These articles cover the following requirements:
* An active Azure subscription. * An Azure IoT hub in your subscription.
- * A client application that sends messages to your Azure IoT hub.
+ * A client app that sends messages to your Azure IoT hub.
* A Power BI account. ([Try Power BI for free](https://powerbi.microsoft.com/))
Let's start by creating a Stream Analytics job. After you create the job, you de
### Create a Stream Analytics job
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource** > **Internet of Things** > **Stream Analytics job**.
+1. In the [Azure portal](https://portal.azure.com), select **Create a resource**. Type *Stream Analytics job* in the search box and select it from the drop-down list. On the **Stream Analytics job** overview page, select **Create**.
2. Enter the following information for the job.
Let's start by creating a Stream Analytics job. After you create the job, you de
**Location**: Use the same location as your resource group.
- ![Create a Stream Analytics job in Azure](./media/iot-hub-live-data-visualization-in-power-bi/create-stream-analytics-job.png)
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/create-stream-analytics-job.png" alt-text="Create a Stream Analytics job in Azure":::
3. Select **Create**.
Let's start by creating a Stream Analytics job. After you create the job, you de
**Shared access policy name**: Select the name of the shared access policy you want the Stream Analytics job to use for your IoT hub. For this tutorial, you can select *service*. The *service* policy is created by default on new IoT hubs and grants permission to send and receive on cloud-side endpoints exposed by the IoT hub. To learn more, see [Access control and permissions](iot-hub-dev-guide-sas.md#access-control-and-permissions).
- **Shared access policy key**: This field is auto-filled based on your selection for the shared access policy name.
+ **Shared access policy key**: This field is autofilled based on your selection for the shared access policy name.
**Consumer group**: Select the consumer group you created previously. Leave all other fields at their defaults.
- ![Add an input to a Stream Analytics job in Azure](./media/iot-hub-live-data-visualization-in-power-bi/add-input-to-stream-analytics-job.png)
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-input-to-stream-analytics-job.png" alt-text="Add an input to a Stream Analytics job in Azure":::
4. Select **Save**.
Let's start by creating a Stream Analytics job. After you create the job, you de
1. Under **Job topology**, select **Outputs**.
-2. In the **Outputs** pane, select **Add** and **Power BI**.
+2. In the **Outputs** pane, select **Add**, and then select **Power BI** from the drop-down list.
3. On the **Power BI - New output** pane, select **Authorize** and follow the prompts to sign in to your Power BI account.
Let's start by creating a Stream Analytics job. After you create the job, you de
**Authentication mode**: Leave at the default.
- ![Add an output to a Stream Analytics job in Azure](./media/iot-hub-live-data-visualization-in-power-bi/add-output-to-stream-analytics-job.png)
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-output-to-stream-analytics-job.png" alt-text="Add an output to a Stream Analytics job in Azure":::
5. Select **Save**.
Let's start by creating a Stream Analytics job. After you create the job, you de
3. Replace `[YourOutputAlias]` with the output alias of the job.
- ![Add a query to a Stream Analytics job in Azure](./media/iot-hub-live-data-visualization-in-power-bi/add-query-to-stream-analytics-job.png)
+1. Add the following `WHERE` clause as the last line of the query. This line ensures that only messages with a **temperature** property will be forwarded to Power BI.
+
+ ```sql
+ WHERE temperature IS NOT NULL
+ ```
+1. Your query should look similar to the following screenshot. Select **Save query**.
-4. Select **Save query**.
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-query-to-stream-analytics-job.png" alt-text="Add a query to a Stream Analytics job":::
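Assembled from the steps above, the full query amounts to a short pass-through with the filter appended. The bracketed aliases below are the input and output alias placeholders you replaced earlier, not literal names:

```sql
SELECT
    *
INTO
    [YourOutputAlias]
FROM
    [YourInputAlias]
WHERE temperature IS NOT NULL
```

Stream Analytics forwards every event that carries a non-null **temperature** property to the Power BI output and drops the rest.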
### Run the Stream Analytics job In the Stream Analytics job, select **Overview**, then select **Start** > **Now** > **Start**. Once the job successfully starts, the job status changes from **Stopped** to **Running**.
-![Run a Stream Analytics job in Azure](./media/iot-hub-live-data-visualization-in-power-bi/run-stream-analytics-job.png)
## Create and publish a Power BI report to visualize the data The following steps show you how to create and publish a report using the Power BI service. You can follow these steps, with some modification, if you want to use the "new look" in Power BI. To understand the differences and how to navigate in the "new look", see [The 'new look' of the Power BI service](/power-bi/consumer/service-new-look).
-1. Ensure the sample application is running on your device. If not, you can refer to the tutorials under [Setup your device](./iot-hub-raspberry-pi-kit-node-get-started.md).
+1. Make sure the client app is running on your device.
-2. Sign in to your [Power BI](https://powerbi.microsoft.com/en-us/) account.
+2. Sign in to your [Power BI](https://powerbi.microsoft.com/) account and select **Power BI service** from the top menu.
-3. Select the workspace you used, **My Workspace**.
+3. Select the workspace you used from the side menu, **My Workspace**.
-4. Select **Datasets**.
+4. Under the **All** tab or the **Datasets + dataflows** tab, you should see the dataset that you specified when you created the output for the Stream Analytics job.
- You should see the dataset that you specified when you created the output for the Stream Analytics job.
+5. Hover over the dataset you created, select the **More options** menu (the three dots to the right of the dataset name), and then select **Create report**.
-5. For the dataset you created, select **Add Report** (the first icon to the right of the dataset name).
-
- ![Create a Microsoft Power BI report](./media/iot-hub-live-data-visualization-in-power-bi/power-bi-create-report.png)
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-create-report.png" alt-text="Create a Microsoft Power BI report":::
6. Create a line chart to show real-time temperature over time.
- 1. On the **Visualizations** pane of the report creation page, select the line chart icon to add a line chart.
+ 1. On the **Visualizations** pane of the report creation page, select the line chart icon to add a line chart. Use the guides located on the sides and corners of the chart to adjust its size and position.
2. On the **Fields** pane, expand the table that you specified when you created the output for the Stream Analytics job.
The following steps show you how to create and publish a report using the Power
A line chart is created. The x-axis displays date and time in the UTC time zone. The y-axis displays temperature from the sensor.
- ![Add a line chart for temperature to a Microsoft Power BI report](./media/iot-hub-live-data-visualization-in-power-bi/power-bi-add-temperature.png)
-
-7. Create another line chart to show real-time humidity over time. To do this, click on a blank part of the canvas and follow the same steps above to place **EventEnqueuedUtcTime** on the x-axis and **humidity** on the y-axis.
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-add-temperature.png" alt-text="Add a line chart for temperature to a Microsoft Power BI report":::
- ![Add a line chart for humidity to a Microsoft Power BI report](./media/iot-hub-live-data-visualization-in-power-bi/power-bi-add-humidity.png)
+ > [!NOTE]
+ > Depending on the device or simulated device that you use to send telemetry data, you may have a slightly different list of fields.
+ >
-8. Select **Save** to save the report.
+8. Select **Save** to save the report. When prompted, enter a name for your report. When prompted for a sensitivity label, you can select **Public** and then select **Save**.
-9. Select **Reports** on the left pane, and then select the report that you just created.
+10. Still on the report pane, select **File** > **Embed report** > **Website or portal**.
-10. Select **File** > **Publish to web**.
-
- ![Select publish to web for the Microsoft Power BI report](./media/iot-hub-live-data-visualization-in-power-bi/power-bi-select-publish-to-web.png)
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-select-embed-report.png" alt-text="Select embed report website for the Microsoft Power BI report":::
> [!NOTE] > If you get a notification to contact your administrator to enable embed code creation, you may need to contact them. Embed code creation must be enabled before you can complete this step. >
- > ![Contact your administrator notification](./media/iot-hub-live-data-visualization-in-power-bi/contact-admin.png)
+ > :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/contact-admin.png" alt-text="Contact your administrator notification":::
++
+11. You're provided a link to the report that you can share with anyone, and a code snippet that you can use to embed the report in a blog or website. Copy the link in the **Secure embed code** window, and then close the window.
+
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/copy-secure-embed-code.png" alt-text="Copy the embed report link":::
-11. Select **Create embed code**, and then select **Publish**.
+12. Open a web browser and paste the link into the address bar.
-You're provided the report link that you can share with anyone for report access and a code snippet that you can use to integrate the report into your blog or website.
+ :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-web-output.png" alt-text="Publish a Microsoft Power BI report":::
-![Publish a Microsoft Power BI report](./media/iot-hub-live-data-visualization-in-power-bi/power-bi-web-output.png)
+Microsoft also offers the [Power BI mobile apps](https://powerbi.microsoft.com/documentation/powerbi-power-bi-apps-for-mobile-devices/) for viewing and interacting with your Power BI dashboards and reports on your mobile device.
-Microsoft also offers the [Power BI mobile apps](https://powerbi.microsoft.com/en-us/documentation/powerbi-power-bi-apps-for-mobile-devices/) for viewing and interacting with your Power BI dashboards and reports on your mobile device.
+## Clean up resources
+
+In this tutorial, you've created a resource group, an IoT hub, a Stream Analytics job, and a dataset in Power BI.
+
+If you plan to complete other tutorials, you may want to leave the resource group and IoT hub and reuse them later.
+
+If you no longer need the IoT hub or the other resources you created, you can delete the resource group in the portal. To do so, select the resource group and then select **Delete resource group**. If you want to keep the IoT hub, you can delete the other resources from the **Overview** pane of the resource group. To do so, right-click the resource, select **Delete** from the context menu, and follow the prompts.
+
+### Use the Azure CLI to clean up Azure resources
+
+To remove the resource group and all of its resources, use the [az group delete](/cli/azure/group#az_group_delete) command.
+
+```azurecli-interactive
+az group delete --name {your resource group}
+```
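If you script the cleanup, the same command can run non-interactively; `--yes` and `--no-wait` are standard `az group delete` flags (the resource group name remains a placeholder):

```azurecli-interactive
az group delete --name {your resource group} --yes --no-wait
```

`--yes` suppresses the confirmation prompt, and `--no-wait` returns immediately while deletion continues in the background.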
+
+### Clean up Power BI resources
+
+You created a dataset, **PowerBiVisualizationDataSet**, in Power BI. To remove it, sign in to your [Power BI](https://powerbi.microsoft.com/) account. On the left-hand menu under **Workspaces**, select **My workspace**. In the list of datasets under the **Datasets + dataflows** tab, hover over the **PowerBiVisualizationDataSet** dataset. Select the three vertical dots that appear to the right of the dataset name to open the **More options** menu, and then select **Delete** and follow the prompts. When you remove the dataset, the report is removed as well.
## Next steps
-You've successfully used Power BI to visualize real-time sensor data from your Azure IoT hub.
+In this tutorial, you learned how to use Power BI to visualize real-time sensor data from your Azure IoT hub by performing the following tasks:
+
+> [!div class="checklist"]
+> * Create a consumer group on your IoT hub.
+> * Create and configure an Azure Stream Analytics job to read temperature telemetry from your consumer group and send it to Power BI.
+> * Configure a report for the temperature data in Power BI and share it to the web.
-For another way to visualize data from Azure IoT Hub, see [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
+For another way to visualize data from Azure IoT Hub, see the following article.
+> [!div class="nextstepaction"]
+> [Use a web app to visualize real-time sensor data from Azure IoT Hub](iot-hub-live-data-visualization-in-web-apps.md).
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-run-jupyter-notebooks.md
Using the following keystroke shortcuts, you can more easily navigate and run co
* When compute instance is deployed in a workspace with a private endpoint, it can only be [accessed from within virtual network](./how-to-secure-training-vnet.md#compute-instance). If you are using a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms` with the private IP address of your workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md?tabs=azure-cli) article.
-* If your kernel crashed and was restarted you can run the following command to look at jupyter log and find out more details. `sudo journalctl -u jupyter`. if kernel issues persist, please consider using a compute instance with more memory.
+* If your kernel crashed and was restarted, you can run the following command to look at the Jupyter log and find more details: `sudo journalctl -u jupyter`. If kernel issues persist, consider using a compute instance with more memory.
+
+* If you run into an expired token issue, sign out of your Azure ML studio, sign back in, and then restart the notebook kernel.
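As a sketch, the `journalctl` invocation above can be narrowed with standard flags (these options are generic journalctl behavior, not specific to Azure ML):

```shell
# Show the last 100 Jupyter service log entries from the past hour
sudo journalctl -u jupyter --since "1 hour ago" -n 100

# Follow the log live while reproducing the kernel crash
sudo journalctl -u jupyter -f
```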
## Next steps
mysql Concepts Data Encryption Mysql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-data-encryption-mysql.md
Data encryption with customer-managed keys for Azure Database for MySQL, is set
Key Vault is a cloud-based, external key management system. It's highly available and provides scalable, secure storage for RSA cryptographic keys, optionally backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). It doesn't allow direct access to a stored key, but does provide services of encryption and decryption to authorized entities. Key Vault can generate the key, import it, or [have it transferred from an on-premises HSM device](../key-vault/keys/hsm-protected-keys.md). > [!NOTE]
-> This feature is available in all Azure regions where Azure Database for MySQL supports "General Purpose" and "Memory Optimized" pricing tiers. For other limitations, refer to the [limitation](concepts-data-encryption-mysql.md#limitations) section.
+> This feature is supported only on "General Purpose storage v2 (supports up to 16 TB)" storage, available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-data-encryption-mysql.md#limitations) section.
## Benefits
To avoid issues while setting up customer-managed data encryption during restore
For Azure Database for MySQL, the support for encryption of data at rest using customer-managed keys (CMK) has a few limitations: * Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers, which support storage up to 16 TB. For the list of Azure regions supporting storage up to 16 TB, refer to the storage section in documentation [here](concepts-pricing-tiers.md#storage)
+* This feature is only supported in regions and servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the [storage section](concepts-pricing-tiers.md#storage) of the pricing tiers documentation.
> [!NOTE]
- > - All new MySQL servers created in the regions listed above, support for encryption with customer manager keys is **available**. Point In Time Restored (PITR) server or read replica will not qualify though in theory they are 'new'.
- > - To validate if your provisioned server supports up to 16TB, you can go to the pricing tier blade in the portal and see the max storage size supported by your provisioned server. If you can move the slider up to 4TB, your server may not support encryption with customer managed keys. However, the data is encrypted using service managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
+ > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, support for encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica will not qualify, though in theory they are 'new'.
+ > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can only move the slider up to 4 TB, your server is on general purpose storage v1 and will not support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
* Encryption is only supported with RSA 2048 cryptographic key. ## Next steps
-Learn how to set up data encryption with a customer-managed key for your Azure database for MySQL by using the [Azure portal](howto-data-encryption-portal.md) and [Azure CLI](howto-data-encryption-cli.md).
+* Learn how to set up data encryption with a customer-managed key for your Azure database for MySQL by using the [Azure portal](howto-data-encryption-portal.md) and [Azure CLI](howto-data-encryption-cli.md).
+* Learn about the storage type support for [Azure Database for MySQL - Single Server](concepts-pricing-tiers.md#storage)
mysql Concepts Infrastructure Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-infrastructure-double-encryption.md
Azure Database for MySQL uses storage [encryption of data at-rest](concepts-secu
Infrastructure double encryption adds a second layer of encryption using service-managed keys. It uses FIPS 140-2 validated cryptographic module, but with a different encryption algorithm. This provides an additional layer of protection for your data at rest. The key used in Infrastructure double encryption is also managed by the Azure Database for MySQL service. Infrastructure double encryption is not enabled by default since the additional layer of encryption can have a performance impact. > [!NOTE]
-> This feature is only supported for "General Purpose" and "Memory Optimized" pricing tiers in Azure Database for MySQL.
+> Like data encryption at rest, this feature is supported only on "General Purpose storage v2 (supports up to 16 TB)" storage, available in the General Purpose and Memory Optimized pricing tiers. Refer to [Storage concepts](concepts-pricing-tiers.md#storage) for more details. For other limitations, refer to the [limitations](concepts-infrastructure-double-encryption.md#limitations) section.
Infrastructure Layer encryption has the benefit of being implemented at the layer closest to the storage device or network wires. Azure Database for MySQL implements the two layers of encryption using service-managed keys. Although still technically in the service layer, it is very close to hardware that stores the data at rest. You can still optionally enable data encryption at rest using [customer managed key](concepts-data-encryption-mysql.md) for the provisioned MySQL server. Implementation at the infrastructure layers also supports a diversity of keys. Infrastructure must be aware of different clusters of machine and networks. As such, different keys are used to minimize the blast radius of infrastructure attacks and a variety of hardware and network failures. > [!NOTE]
-> Using Infrastructure double encryption will have performance impact on the Azure Database for MySQL server due to the additional encryption process.
+> Using Infrastructure double encryption will have a 5-10% impact on the throughput of your Azure Database for MySQL server due to the additional encryption process.
## Benefits
The encryption capabilities that are provided by Azure Database for MySQL can be
## Limitations
-For Azure Database for MySQL, the support for infrastructure double encryption using service-managed key has the following limitations:
+For Azure Database for MySQL, the support for infrastructure double encryption has a few limitations:
* Support for this functionality is limited to **General Purpose** and **Memory Optimized** pricing tiers.
-* This feature is only supported in regions and servers, which support storage up to 16 TB. For the list of Azure regions supporting storage up to 16 TB, refer to the [storage documentation](concepts-pricing-tiers.md#storage).
+* This feature is only supported in regions and servers that support general purpose storage v2 (up to 16 TB). For the list of Azure regions supporting storage up to 16 TB, refer to the [storage section](concepts-pricing-tiers.md#storage) of the pricing tiers documentation.
> [!NOTE]
- > - All **new** MySQL servers created in the regions listed above also support data encryption with customer manager keys. In this case, servers created through point-in-time restore (PITR) or read replicas do not qualify as "new".
- > - To validate if your provisioned server supports up to 16 TB, you can go to the pricing tier blade in the portal and see if the storage slider can be moved up to 16 TB. If you can only move the slider up to 4 TB, your server may not support encryption with customer managed keys; however, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
+ > - For all new MySQL servers created in the [Azure regions](concepts-pricing-tiers.md#storage) supporting general purpose storage v2, support for encryption with customer-managed keys is **available**. A Point In Time Restored (PITR) server or read replica will not qualify, though in theory they are 'new'.
+ > - To validate whether your provisioned server is on general purpose storage v2, go to the pricing tier blade in the portal and check the max storage size supported by your provisioned server. If you can only move the slider up to 4 TB, your server is on general purpose storage v1 and will not support encryption with customer-managed keys. However, the data is encrypted using service-managed keys at all times. Please reach out to AskAzureDBforMySQL@service.microsoft.com if you have any questions.
++ ## Next steps
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-limits.md
The following are unsupported:
- Dynamic scaling to and from the Basic pricing tiers is currently not supported. - Decreasing server storage size is not supported.
-### Server version upgrades
-- Automated migration between major database engine versions is currently not supported. If you would like to upgrade to the next major version, take a [dump and restore](./concepts-migrate-dump-restore.md) it to a server that was created with the new engine version.
+### Major version upgrades
+- [Major version upgrade is supported for v5.6 to v5.7 upgrades only](how-to-major-version-upgrade.md). Upgrades to v8.0 are not supported yet.
### Point-in-time-restore - When using the PITR feature, the new server is created with the same configurations as the server it is based on.
The following are unsupported:
- Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. ### Storage size-- Please refer to [pricing tiers](concepts-pricing-tiers.md) for the storage size limits per pricing tier.
+- Please refer to [pricing tiers](concepts-pricing-tiers.md#storage) for the storage size limits per pricing tier.
## Current known issues - MySQL server instance displays the wrong server version after connection is established. To get the correct server instance engine version, use the `select version();` command.
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/concepts-server-parameters.md
The binary logging format is always **ROW** and all connections to the server **
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size) to learn more about this parameter.
-#### Servers supporting up to 4 TB storage
+#### Servers on [general purpose storage v1 (supporting up to 4 TB)](concepts-pricing-tiers.md#general-purpose-storage-v1-supports-up-to-4-tb)
|**Pricing Tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| ||||||
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-
|Memory Optimized|16|65498251264|134217728|65498251264| |Memory Optimized|32|132070244352|134217728|132070244352|
-#### Servers support up to 16 TB storage
+#### Servers on [general purpose storage v2 (supporting up to 16 TB)](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage)
|**Pricing Tier**|**vCore(s)**|**Default value (bytes)**|**Min value (bytes)**|**Max value (bytes)**| ||||||
Review the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/innodb-
### innodb_file_per_table > [!NOTE]
-> `innodb_file_per_table` can only be updated in the General Purpose and Memory Optimized pricing tiers.
+> `innodb_file_per_table` can only be updated in the General Purpose and Memory Optimized pricing tiers on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage).
MySQL stores the InnoDB table in different tablespaces based on the configuration you provided during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the InnoDB data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single InnoDB table, and is stored in the file system in its own data file. This behavior is controlled by the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes InnoDB to create tables in the system tablespace. Otherwise, InnoDB creates tables in file-per-table tablespaces.
-Azure Database for MySQL supports at largest, **4 TB**, in a single data file. If your database size is larger than 4 TB, you should create the table in [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If you have a single table size larger than 4 TB, you should use the partition table.
+Azure Database for MySQL supports at most **4 TB** in a single data file on [general purpose storage v2](concepts-pricing-tiers.md#general-purpose-storage-v2-supports-up-to-16-tb-storage). If your database size is larger than 4 TB, you should create the table in the [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If you have a single table larger than 4 TB, you should use a partitioned table.
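For illustration, a hypothetical table partitioned by year keeps each partition in its own file-per-table tablespace, so no single data file outgrows the 4-TB limit (the table and column names here are invented for the example):

```sql
-- Hypothetical example: range-partition a large table by year.
-- Each partition gets its own tablespace file when
-- innodb_file_per_table is ON.
CREATE TABLE orders (
    id BIGINT NOT NULL,
    created_at DATE NOT NULL,
    PRIMARY KEY (id, created_at)  -- partition column must be in every unique key
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2020 VALUES LESS THAN (2021),
    PARTITION p2021 VALUES LESS THAN (2022),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```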
### join_buffer_size
Other variables not listed here are set to the default MySQL out-of-the-box valu
- Learn how to [configure server parameters using the Azure portal](./howto-server-parameters.md) - Learn how to [configure server parameters using the Azure CLI](./howto-configure-server-parameters-using-cli.md)-- Learn how to [configure server parameters using PowerShell](./howto-configure-server-parameters-using-powershell.md)
+- Learn how to [configure server parameters using PowerShell](./howto-configure-server-parameters-using-powershell.md)
mysql Howto Data Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-data-encryption-cli.md
You can verify the above attributes of the key by using the following command:
```azurecli-interactive az keyvault key show --vault-name <key_vault_name> -n <key_name> ```
+* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer-managed keys](concepts-data-encryption-mysql.md#limitations).
## Set the right permissions for key operations
Additionally, you can use Azure Resource Manager templates to enable data encryp
## Next steps
- To learn more about data encryption, see [Azure Database for MySQL data encryption with customer-managed key](concepts-data-encryption-mysql.md).
+* [Validating data encryption for Azure Database for MySQL](howto-data-encryption-validation.md)
+* [Troubleshoot data encryption in Azure Database for MySQL](howto-data-encryption-troubleshoot.md)
+* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).
+
mysql Howto Data Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-data-encryption-portal.md
You can verify the above attributes of the key by using the following command:
```azurecli-interactive az keyvault key show --vault-name <key_vault_name> -n <key_name> ```-
+* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [data encryption with customer-managed keys](concepts-data-encryption-mysql.md#limitations).
## Set the right permissions for key operations 1. In Key Vault, select **Access policies** > **Add Access Policy**.
After Azure Database for MySQL is encrypted with a customer's managed key stored
## Next steps
- To learn more about data encryption, see [Azure Database for MySQL data encryption with customer-managed key](concepts-data-encryption-mysql.md).
+* [Validating data encryption for Azure Database for MySQL](howto-data-encryption-validation.md)
+* [Troubleshoot data encryption in Azure Database for MySQL](howto-data-encryption-troubleshoot.md)
+* [Data encryption with customer-managed key concepts](concepts-data-encryption-mysql.md).
mysql Howto Double Encryption https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/howto-double-encryption.md
Learn how to set up and manage Infrastructure double encryption for
## Prerequisites * You must have an Azure subscription and be an administrator on that subscription.
+* The Azure Database for MySQL - Single Server should be on the General Purpose or Memory Optimized pricing tier and on general purpose storage v2. Before you proceed further, refer to the limitations for [infrastructure double encryption](concepts-infrastructure-double-encryption.md#limitations).
## Create an Azure Database for MySQL server with Infrastructure Double encryption - Portal
security-center Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/defender-for-dns-introduction.md
Title: Azure Defender for DNS - the benefits and features
description: Learn about the benefits and features of Azure Defender for DNS Previously updated : 05/12/2021 Last updated : 07/25/2021
Azure Defender for DNS provides an additional layer of protection for your resou
|-|:-| |Release state:|General Availability (GA)| |Pricing:|**Azure Defender for DNS** is billed as shown on [Security Center pricing](https://azure.microsoft.com/pricing/details/security-center/)|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National/Sovereign (US Gov, Azure China)|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government|
||| ## What are the benefits of Azure Defender for DNS?
security-center Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/recommendations-reference.md
description: This article lists Azure Security Center's security recommendations
Previously updated : 07/06/2021 Last updated : 07/25/2021
security-center Security Center Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/security-center-services.md
Previously updated : 07/21/2021 Last updated : 07/25/2021
For information about when recommendations are generated for each of these prote
| Feature/Service | Azure | Azure Government | Azure China | |--|-|--|| | **Security Center free features** | | | |
-| - [Continuous export](./continuous-export.md) | GA | GA | GA |
-| - [Workflow automation](./continuous-export.md) | GA | GA | GA |
-| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
-| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
-| - [Email notifications for security alerts](./security-center-provide-security-contact-details.md) | GA | GA | GA |
-| - [Auto provisioning for agents and extensions](./security-center-enable-data-collection.md) | GA | GA | GA |
-| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
-| - [Azure Monitor Workbooks reports in Azure Security Center's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
+| - [Continuous export](./continuous-export.md) | GA | GA | GA |
+| - [Workflow automation](./continuous-export.md) | GA | GA | GA |
+| - [Recommendation exemption rules](./exempt-resource.md) | Public Preview | Not Available | Not Available |
+| - [Alert suppression rules](./alerts-suppression-rules.md) | GA | GA | GA |
+| - [Email notifications for security alerts](./security-center-provide-security-contact-details.md) | GA | GA | GA |
+| - [Auto provisioning for agents and extensions](./security-center-enable-data-collection.md) | GA | GA | GA |
+| - [Asset inventory](./asset-inventory.md) | GA | GA | GA |
+| - [Azure Monitor Workbooks reports in Azure Security Center's workbooks gallery](./custom-dashboards-azure-workbooks.md) | GA | GA | GA |
| **Azure Defender plans and extensions** | | | | | - [Azure Defender for servers](/azure/security-center/defender-for-servers-introduction) | GA | GA | GA | | - [Azure Defender for App Service](/azure/security-center/defender-for-app-service-introduction) | GA | Not Available | Not Available |
-| - [Azure Defender for DNS](/azure/security-center/defender-for-dns-introduction) | GA | Not Available | Not Available |
+| - [Azure Defender for DNS](/azure/security-center/defender-for-dns-introduction) | GA | Not Available | GA |
| - [Azure Defender for container registries](/azure/security-center/defender-for-container-registries-introduction) <sup>[1](#footnote1)</sup> | GA | GA <sup>[2](#footnote2)</sup> | GA <sup>[2](#footnote2)</sup> | | - [Azure Defender for container registries scanning of images in CI/CD workflows](/azure/security-center/defender-for-container-registries-cicd) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | Not Available | | - [Azure Defender for Kubernetes](/azure/security-center/defender-for-kubernetes-introduction) <sup>[4](#footnote4)</sup> | GA | GA | GA |
security-center Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/upcoming-changes.md
Previously updated : 07/22/2021 Last updated : 07/25/2021
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change |
|---|---|
-| [CSV exports to be limited to 20 MB](#csv-exports-to-be-limited-to-20-mb) | July 2021 |
| [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | July 2021 |
| [Deprecating recommendation 'Log Analytics agent health issues should be resolved on your machines'](#deprecating-recommendation-log-analytics-agent-health-issues-should-be-resolved-on-your-machines) | July 2021 |
| [Logical reorganization of Azure Defender for Resource Manager alerts](#logical-reorganization-of-azure-defender-for-resource-manager-alerts) | August 2021 |
-| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q3 2021 |
+| [CSV exports to be limited to 20 MB](#csv-exports-to-be-limited-to-20-mb) | August 2021 |
| [Enable Azure Defender security control to be included in secure score](#enable-azure-defender-security-control-to-be-included-in-secure-score) | Q3 2021 |
-| | |
+| [Enhancements to recommendation to classify sensitive data in SQL databases](#enhancements-to-recommendation-to-classify-sensitive-data-in-sql-databases) | Q1 2022 |
+| | |
-### CSV exports to be limited to 20 MB
-
-**Estimated date for change:** July 2021
-
-When exporting Security Center recommendations data, there's currently no limit on the amount of data that you can download.
--
-With this change, we're instituting a limit of 20 MB.
-
-If you need to export larger amounts of data, use the available filters before selecting, or select subsets of your subscriptions and download the data in batches.
--
-Learn more about [performing a CSV export of your security recommendations](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations).
-
### Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013

**Estimated date for change:** July 2021
These are the alerts that are currently part of Azure Defender for Resource Mana
Learn more about the [Azure Defender for Resource Manager](defender-for-resource-manager-introduction.md) and [Azure Defender for servers](defender-for-servers-introduction.md).
-### Enhancements to recommendation to classify sensitive data in SQL databases
+### CSV exports to be limited to 20 MB
-**Estimated date for change:** Q3 2021
+**Estimated date for change:** August 2021
-The recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result the recommendation's ID will also change (currently, it's b0df6f56-862d-4730-8597-38c0fd4ebd59).
+When exporting Security Center recommendations data, there's currently no limit on the amount of data that you can download.
++
+With this change, we're instituting a limit of 20 MB.
+
+If you need to export larger amounts of data, use the available filters before selecting, or select subsets of your subscriptions and download the data in batches.
++
+Learn more about [performing a CSV export of your security recommendations](continuous-export.md#manual-one-time-export-of-alerts-and-recommendations).
### Enable Azure Defender security control to be included in secure score
With this change, there will be an impact on the secure score of any subscriptio
Learn more in [Quickstart: Enable Azure Defender](enable-azure-defender.md).
+### Enhancements to recommendation to classify sensitive data in SQL databases
+
+**Estimated date for change:** Q1 2022
+
+The recommendation **Sensitive data in your SQL databases should be classified** in the **Apply data classification** security control will be replaced with a new version that's better aligned with Microsoft's data classification strategy. As a result the recommendation's ID will also change (currently, it's b0df6f56-862d-4730-8597-38c0fd4ebd59).
+
## Next steps
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/files/storage-files-introduction.md
Title: Introduction to Azure Files | Microsoft Docs
-description: An overview of Azure Files, a service that enables you to create and use network file shares in the cloud using the industry standard SMB protocol.
+description: An overview of Azure Files, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.
Previously updated : 09/15/2020 Last updated : 07/23/2021

# What is Azure Files?
-Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). Azure file shares can be mounted concurrently by cloud or on-premises deployments. Azure Files SMB file shares are accessible from Windows, Linux, and macOS clients. Azure Files NFS file shares are accessible from Linux or macOS clients. Additionally, Azure Files SMB file shares can be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) or [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System). Azure Files file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from Linux or macOS clients. Additionally, SMB Azure file shares can be cached on Windows Servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used.
Here are some videos on the common use cases of Azure Files:
-* [Replace your file server with a serverless Azure File Share](https://sec.ch9.ms/ch9/3358/0addac01-3606-4e30-ad7b-f195f3ab3358/ITOpsTalkAzureFiles_high.mp4)
+* [Replace your file server with a serverless Azure file share](https://sec.ch9.ms/ch9/3358/0addac01-3606-4e30-ad7b-f195f3ab3358/ITOpsTalkAzureFiles_high.mp4)
* [Getting started with FSLogix profile containers on Azure Files in Windows Virtual Desktop leveraging AD authentication](https://www.youtube.com/embed/9S5A1IJqfOQ)

## Why Azure Files is useful

Azure file shares can be used to:

* **Replace or supplement on-premises file servers**:
- Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file shares wherever they are in the world. Azure File SMB file shares can also be replicated with Azure File Sync to Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the data where it's being used. With the recent release of [Azure Files AD Authentication](storage-files-active-directory-overview.md), Azure File SMB file shares can continue to work with AD hosted on-premises for access control.
+ Azure Files can be used to completely replace or supplement traditional on-premises file servers or NAS devices. Popular operating systems such as Windows, macOS, and Linux can directly mount Azure file shares wherever they are in the world. SMB Azure file shares can also be replicated with Azure File Sync to Windows Servers, either on-premises or in the cloud, for performance and distributed caching of the data where it's being used. With the recent release of [Azure Files AD Authentication](storage-files-active-directory-overview.md), SMB Azure file shares can continue to work with AD hosted on-premises for access control.
* **"Lift and shift" applications**: Azure Files makes it easy to "lift and shift" applications to the cloud that expect a file share to store file application or user data. Azure Files enables both the "classic" lift and shift scenario, where both the application and its data are moved to Azure, and the "hybrid" lift and shift scenario, where the application data is moved to Azure Files, and the application continues to run on-premises.
storage Storage Quickstart Queues Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/queues/storage-quickstart-queues-python.md
Title: 'Quickstart: Azure Queue Storage client library v12 - Python'
description: Learn how to use the Azure Queue Storage client library v12 for Python to create a queue and add messages to it. Then learn how to read and delete messages from the queue. You'll also learn how to delete a queue.
Previously updated : 12/10/2019 Last updated : 07/23/2021
virtual-desktop Deploy Azure Ad Joined Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/deploy-azure-ad-joined-vm.md
Previously updated : 07/14/2021 Last updated : 07/23/2021

# Deploy Azure AD joined virtual machines in Azure Virtual Desktop
This article will walk you through the process of deploying and accessing Azure
The following configurations are currently supported with Azure AD-joined VMs:

- Personal desktops with local user profiles.
-- Pooled desktops used as a jump box. In this configuration, users first access the Azure Virtual Desktop VM before connecting to a different PC on the network. Users should not save data on the VM.
+- Pooled desktops used as a jump box. In this configuration, users first access the Azure Virtual Desktop VM before connecting to a different PC on the network. Users shouldn't save data on the VM.
- Pooled desktops or apps where users don't need to save data on the VM. For example, for applications that save data online or connect to a remote database.
-User accounts can be cloud-only or hybrid users from the same Azure AD tenant. External users are not supported at this time.
+User accounts can be cloud-only or hybrid users from the same Azure AD tenant. External users aren't supported at this time.
## Deploy Azure AD-joined VMs
To enable access from Windows devices not joined to Azure AD, add **targetisaadj
To access Azure AD-joined VMs using the web, Android, macOS, iOS, and Microsoft Store clients, you must add **targetisaadjoined:i:1** as a [custom RDP property](customize-rdp-properties.md) to the host pool. These connections are restricted to entering user name and password credentials when signing in to the session host.
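For example, the host pool's custom RDP properties can be set from the Azure CLI. This is a sketch assuming the `desktopvirtualization` CLI extension is installed; the resource group and host pool names are placeholders:

```shell
# Sketch only -- <resource-group> and <host-pool> are placeholders.
# Assumes the extension is available: az extension add --name desktopvirtualization
az desktopvirtualization hostpool update \
    --resource-group <resource-group> \
    --name <host-pool> \
    --custom-rdp-property "targetisaadjoined:i:1"
```

Note that `--custom-rdp-property` replaces the host pool's existing custom RDP property string, so include any other properties you already rely on in the same semicolon-separated value.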
+### Enabling MFA for Azure AD joined VMs
+
+You can enable [multifactor authentication](set-up-mfa.md) for Azure AD joined VMs by setting a Conditional Access policy on the "Windows Virtual Desktop" app. Unless you want to restrict sign in to strong authentication methods like Windows Hello, you should exclude the "Azure Windows VM Sign-In" app from the list of cloud apps as described in the [MFA sign-in method requirements](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required) for Azure AD joined VMs. If you are using non-Windows clients, you must disable the MFA policy on "Azure Windows VM Sign-In".
+
## User profiles

Azure Virtual Desktop currently only supports local profiles for Azure AD-joined VMs.
virtual-desktop Troubleshoot Azure Ad Connections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/troubleshoot-azure-ad-connections.md
Previously updated : 07/14/2021 Last updated : 07/23/2021

# Connections to Azure AD-joined VMs
Use this article to resolve issues with connections to Azure AD-joined VMs in Az
Visit the [Azure Virtual Desktop Tech Community](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/bd-p/AzureVirtualDesktopForum) to discuss the Azure Virtual Desktop service with the product team and active community members.
-## The logon attempt failed
+## All clients
-If you encounter an error saying **The logon attempt failed** on the Windows Security credential prompt, verify the following:
+### Your account is configured to prevent you from using this device
+
+If you come across an error saying **Your account is configured to prevent you from using this device. For more information, contact your system administrator**, ensure the user account was given the [Virtual Machine User Login role](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#azure-role-not-assigned) on the VMs.
+
+## Windows Desktop client
+
+### The logon attempt failed
+
+If you come across an error saying **The logon attempt failed** on the Windows Security credential prompt, verify the following:
- You are on a device that is Azure AD-joined or hybrid Azure AD-joined to the same Azure AD tenant as the session host OR
- You are on a device running Windows 10 2004 or later that is Azure AD registered to the same Azure AD tenant as the session host
- The [PKU2U protocol is enabled](/windows/security/threat-protection/security-policy-settings/network-security-allow-pku2u-authentication-requests-to-this-computer-to-use-online-identities) on both the local PC and the session host
-## Your account is configured to prevent you from using this device
+### The sign-in method you're trying to use isn't allowed
+
+If you come across an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have Conditional Access policies restricting the type of credentials that can be used to sign in to the VMs. Ensure you use the right credential type when signing in or update your [Conditional Access policies](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md#mfa-sign-in-method-required).
+
+## Web client
+
+### Sign in failed. Please check your username and password and try again
+
+If you come across an error saying **Oops, we couldn't connect to NAME. Sign in failed. Please check your username and password and try again.** when using the web client, ensure that you [enabled connections from other clients](deploy-azure-ad-joined-vm.md#connect-using-the-other-clients).
+
+### We couldn't connect to the remote PC because of a security error
+
+If you come across an error saying **Oops, we couldn't connect to NAME. We couldn't connect to the remote PC because of a security error. If this keeps happening, ask your admin or tech support for help.**, you have Conditional Access policies restricting the type of credentials that can be used to sign in to the VMs. This isn't supported for this client. Follow the instructions to [enable multifactor authentication](deploy-azure-ad-joined-vm.md#enabling-mfa-for-azure-ad-joined-vms) for Azure AD joined VMs.
-If you encounter an error saying **Your account is configured to prevent you from using this device. For more information, contact your system administrator**, ensure the user account was given the [Virtual Machine User Login role](../active-directory/devices/howto-vm-sign-in-azure-ad-windows.md) on the VMs.
+## Android client
-## The sign-in method you're trying to use isn't allowed
+### Error code 2607 - We couldn't connect to the remote PC because your credentials did not work
-If you encounter an error saying **The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator**, you have some Conditional Access policies restricting the type of credentials that be used to sign-in to the VMs. Ensure you use the right credential type when signing in.
+If you come across an error saying **We couldn't connect to the remote PC because your credentials did not work. The remote machine is AADJ joined.** with error code 2607 when using the Android client, ensure that you [enabled connections from other clients](deploy-azure-ad-joined-vm.md#connect-using-the-other-clients).
## Next steps