Updates from: 10/06/2022 01:14:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Takes a source string value and appends the suffix to the end of it.
#### Append constant suffix to user name
-Example: If you are using a Salesforce Sandbox, you might need to append another suffix to all your user names before synchronizing them.
+Example: If you're using a Salesforce Sandbox, you might need to append another suffix to all your user names before synchronizing them.
**Expression:** `Append([userPrincipalName], ".test")`
Returns True if both attributes have the same value.
`CDate(expression)` **Description:**
-The CDate function returns a UTC DateTime from a string. DateTime is not a native attribute type but it can be used within date functions such as [FormatDateTime](#formatdatetime) and [DateAdd](#dateadd).
+The CDate function returns a UTC DateTime from a string. DateTime isn't a native attribute type but it can be used within date functions such as [FormatDateTime](#formatdatetime) and [DateAdd](#dateadd).
**Parameters:**
The returned string is always in UTC and follows the format **M/d/yyyy h:mm:ss t
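As a hedged illustration, assuming a source attribute named *StatusHireDate* that carries a hire date as a string, the following wraps it so it can feed other date functions:
**Expression:** `CDate([StatusHireDate])`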
Coalesce(source1, source2, ..., defaultValue) **Description:**
-Returns the first source value that is not NULL. If all arguments are NULL and defaultValue is present, the defaultValue will be returned. If all arguments are NULL and defaultValue is not present, Coalesce returns NULL.
+Returns the first source value that isn't NULL. If all arguments are NULL and defaultValue is present, the defaultValue will be returned. If all arguments are NULL and defaultValue isn't present, Coalesce returns NULL.
**Parameters:**
Returns the first source value that is not NULL. If all arguments are NULL and d
| **defaultValue** | Optional | String | Default value to be used when all source values are NULL. Can be empty string (""). |
#### Flow mail value if not NULL, otherwise flow userPrincipalName
-Example: You wish to flow the mail attribute if it is present. If it is not, you wish to flow the value of userPrincipalName instead.
+Example: You wish to flow the mail attribute if it is present. If it isn't, you wish to flow the value of userPrincipalName instead.
**Expression:** `Coalesce([mail],[userPrincipalName])`
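A hedged variant that also supplies the optional *defaultValue* (an empty string here, which the parameter table explicitly allows) so that an empty string flows when both attributes are NULL:
**Expression:** `Coalesce([mail], [userPrincipalName], "")`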
Returns a date/time string representing a date to which a specified time interva
| **value** |Required | Number | The number of units you want to add. It can be positive (to get dates in the future) or negative (to get dates in the past). | | **dateTime** |Required | DateTime | DateTime representing date to which the interval is added. |
-When passing a date string as input use [CDate](#cdate) function to wrap the datetime string. To get system time in UTC use the [Now](#now) function.
+When passing a date string as input, use [CDate](#cdate) function to wrap the datetime string. To get system time in UTC, use the [Now](#now) function.
The **interval** string must have one of the following values: * yyyy Year
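A hedged sketch, assuming an illustrative source attribute *StatusHireDate* that holds a date string: the expression below adds one year to the hire date, using CDate to wrap the string value.
**Expression:** `DateAdd("yyyy", 1, CDate([StatusHireDate]))`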
This function uses the *interval* parameter to return a number that indicates th
| **date1** |Required | DateTime | DateTime representing a valid date. | | **date2** |Required | DateTime | DateTime representing a valid date. |
-When passing a date string as input use [CDate](#cdate) function to wrap the datetime string. To get system time in UTC use the [Now](#now) function.
+When passing a date string as input, use [CDate](#cdate) function to wrap the datetime string. To get system time in UTC, use the [Now](#now) function.
The **interval** string must have one of the following values: * yyyy Year
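A hedged sketch, reusing the same illustrative *StatusHireDate* attribute: the expression below returns the number of year boundaries between the current UTC time and the hire date.
**Expression:** `DateDiff("yyyy", Now(), CDate([StatusHireDate]))`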
Takes a date string from one format and converts it into a different format.
| Name | Required/ Repeating | Type | Notes | | | | | | | **source** |Required |String |Usually name of the attribute from the source object. |
-| **dateTimeStyles** | Optional | String | Use this to specify the formatting options that customize string parsing for some date and time parsing methods. For supported values, see [DateTimeStyles doc](/dotnet/api/system.globalization.datetimestyles). If left empty, the default value used is DateTimeStyles.RoundtripKind, DateTimeStyles.AllowLeadingWhite, DateTimeStyles.AllowTrailingWhite |
+| **dateTimeStyles** | Optional | String | Use this parameter to specify the formatting options that customize string parsing for some date and time parsing methods. For supported values, see [DateTimeStyles doc](/dotnet/api/system.globalization.datetimestyles). If left empty, the default value used is DateTimeStyles.RoundtripKind, DateTimeStyles.AllowLeadingWhite, DateTimeStyles.AllowTrailingWhite |
| **inputFormat** |Required |String |Expected format of the source value. For supported formats, see [.NET custom date and time format strings](/dotnet/standard/base-types/custom-date-and-time-format-strings). | | **outputFormat** |Required |String |Format of the output date. |
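A hedged example, assuming a source attribute *extensionAttribute1* that stores a timestamp in the `yyyyMMddHHmmss.fZ` pattern and should flow as `yyyy-MM-dd`; the optional *dateTimeStyles* parameter is left empty:
**Expression:** `FormatDateTime([extensionAttribute1], , "yyyyMMddHHmmss.fZ", "yyyy-MM-dd")`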
The above expression will drop the department attribute from the provisioning fl
**Example 2: Don't flow an attribute if the expression mapping evaluates to empty string or null** <br> Let's say the SuccessFactors attribute *prefix* is mapped to the on-premises Active Directory attribute *personalTitle* using the following expression mapping: <br> `IgnoreFlowIfNullOrEmpty(Switch([prefix], "", "3443", "Dr.", "3444", "Prof.", "3445", "Prof. Dr."))` <br>
-The above expression first evaluates the [Switch](#switch) function. If the *prefix* attribute does not have any of the values listed within the *Switch* function, then *Switch* will return an empty string and the attribute *personalTitle* will not be included in the provisioning flow to on-premises Active Directory.
+The above expression first evaluates the [Switch](#switch) function. If the *prefix* attribute doesn't have any of the values listed within the *Switch* function, then *Switch* will return an empty string and the attribute *personalTitle* will not be included in the provisioning flow to on-premises Active Directory.
### IIF
The following comparison operators can be used in the *condition*:
`IIF([country]="USA",[country],[department])`
#### Known limitations and workarounds for IIF function
-* The IIF function currently does not support AND and OR logical operators.
+* The IIF function currently doesn't support AND and OR logical operators.
* To implement AND logic, use a nested IIF statement chained along the *trueValue* path. Example: If country="USA" and state="CA", return the value "True", else return "False". `IIF([country]="USA",IIF([state]="CA","True","False"),"False")`
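OR-style logic can be approximated the same way by chaining nested IIF statements along the *falseValue* path; a hedged sketch (the country values are illustrative): `IIF([country]="USA","True",IIF([country]="Canada","True","False"))` returns "True" when the country is either USA or Canada.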
If the expression evaluates to Null, then the IsNull function returns true. For
**Example:** `IsNull([displayName])`
-Returns True if the attribute is not present.
+Returns True if the attribute isn't present.
### IsNullorEmpty
The inverse of this function is named IsPresent.
**Example:** `IsNullOrEmpty([displayName])`
-Returns True if the attribute is not present or is an empty string.
+Returns True if the attribute isn't present or is an empty string.
### IsPresent
Returns True if the attribute is not present or is an empty string.
IsPresent(Expression) **Description:**
-If the expression evaluates to a string that is not Null and is not empty, then the IsPresent function returns true. The inverse of this function is named IsNullOrEmpty.
+If the expression evaluates to a string that isn't Null and isn't empty, then the IsPresent function returns true. The inverse of this function is named IsNullOrEmpty.
**Parameters:**
The Item function returns one item from a multi-valued string/attribute.
| **index** |Required |Integer | Index to an item in the multi-valued string| **Example:**
-`Item([proxyAddresses], 1)` returns the first item in the multi-valued attribute. Index 0 should not be used.
+`Item([proxyAddresses], 1)` returns the first item in the multi-valued attribute. Index 0 shouldn't be used.
### Join
Returns a substring of the source value. A substring is a string that contains o
| | | | | | **source** |Required |String |Usually name of the attribute. | | **start** |Required |Integer |Index in the **source** string where substring should start. First character in the string will have index of 1, second character will have index 2, and so on. |
-| **length** |Required |Integer |Length of the substring. If length ends outside the **source** string, function will return substring from **start** index untill end of **source** string. |
+| **length** |Required |Integer |Length of the substring. If length ends outside the **source** string, function will return substring from **start** index until end of **source** string. |
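A hedged illustration of this substring function (named `Mid` in the provisioning expression reference; the attribute choice is illustrative): `Mid([userPrincipalName], 1, 5)` returns the first five characters of the userPrincipalName value, since the start index is 1-based.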
### NormalizeDiacritics
The PCase function converts the first character of each word in a string to uppe
**Remarks:**
-* If the *wordSeparators* parameter is not specified, then PCase internally invokes the .NET function [ToTitleCase](/dotnet/api/system.globalization.textinfo.totitlecase) to convert the *source* string to proper case. The .NET function *ToTitleCase* supports a comprehensive set of the [Unicode character categories](https://www.unicode.org/reports/tr44/#General_Category_Values) as word separators.
+* If the *wordSeparators* parameter isn't specified, then PCase internally invokes the .NET function [ToTitleCase](/dotnet/api/system.globalization.textinfo.totitlecase) to convert the *source* string to proper case. The .NET function *ToTitleCase* supports a comprehensive set of the [Unicode character categories](https://www.unicode.org/reports/tr44/#General_Category_Values) as word separators.
* Space character
* New line character
* *Control* characters like CRLF
The PCase function converts the first character of each word in a string to uppe
**Example:**
-Let's say you are sourcing the attributes *firstName* and *lastName* from SAP SuccessFactors and in HR both these attributes are in upper-case. Using the PCase function, you can convert the name to proper case as shown below.
+Let's say you're sourcing the attributes *firstName* and *lastName* from SAP SuccessFactors and in HR both these attributes are in upper-case. Using the PCase function, you can convert the name to proper case as shown below.
| Expression | Input | Output | Notes | | | | | |
-| `PCase([firstName])` | *firstName* = "PABLO GONSALVES (SECOND)" | "Pablo Gonsalves (Second)" | As the *wordSeparators* parameter is not specified, the *PCase* function uses the default word separators character set. |
+| `PCase([firstName])` | *firstName* = "PABLO GONSALVES (SECOND)" | "Pablo Gonsalves (Second)" | As the *wordSeparators* parameter isn't specified, the *PCase* function uses the default word separators character set. |
| `PCase([lastName]," '-")` | *lastName* = "PINTO-DE'SILVA" | "Pinto-De'Silva" | The *PCase* function uses characters in the *wordSeparators* parameter to identify words and transform them to proper case. |
-| `PCase(Join(" ",[firstName],[lastName]))` | *firstName* = GREGORY, *lastName* = "JAMES" | "Gregory James" | You can nest the Join function within PCase. As the *wordSeparators* parameter is not specified, the *PCase* function uses the default word separators character set. |
+| `PCase(Join(" ",[firstName],[lastName]))` | *firstName* = GREGORY, *lastName* = "JAMES" | "Gregory James" | You can nest the Join function within PCase. As the *wordSeparators* parameter isn't specified, the *PCase* function uses the default word separators character set. |
### RandomString
**Function:**
-RandomString(Length, MinimumNumbers, MinimumSpecialCharacters , MinimumCapital, MinimumLowerCase, CharactersToAvoid)
+RandomString(Length, MinimumNumbers, MinimumSpecialCharacters, MinimumCapital, MinimumLowerCase, CharactersToAvoid)
**Description:** The RandomString function generates a random string based on the conditions specified. Characters allowed can be identified [here](/windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference).
Then in this case, you can use the following expression in your attribute mappin
**Example 2:** Using **oldValue** and **template** to insert the source string into another *templatized* string. The parameter **oldValue** is a misnomer in this scenario. It is actually the value that will get replaced.
-Let's say you want to always generate login id in the format `<username>@contoso.com`. There is a source attribute called **UserID** and you want that value to be used for the `<username>` portion of the login id.
+Let's say you want to always generate login ID in the format `<username>@contoso.com`. There is a source attribute called **UserID** and you want that value to be used for the `<username>` portion of the login ID.
Then in this case, you can use the following expression in your attribute mapping. `Replace([UserID],"<username>", , , , , "<username>@contoso.com")`
Then in this case, you can use the following expression in your attribute mappin
**Example 3:** Using **regexPattern** and **replacementValue** to extract a portion of the source string and replace it with an empty string or a custom value built using regex patterns or regex group names.
-Let's say you have a source attribute `telephoneNumber` that has components `country code` and `phone number` separated by a space character. E.g. `+91 9998887777`
+Let's say you have a source attribute `telephoneNumber` that has components `country code` and `phone number` separated by a space character. For example, `+91 9998887777`
Then in this case, you can use the following expression in your attribute mapping to extract the 10 digit phone number. `Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})", , "${phoneNumber}", , )`
For example, the expression below removes parenthesis, dashes and space characte
**Example 4:** Using **regexPattern**, **regexGroupName** and **replacementValue** to extract a portion of the source string and replace it with another literal value or empty string.
-Let's say your source system has an attribute AddressLineData with two components street number and street name. As part of a recent move, let's say the street number of the address changed and you want to update only the street number portion of the address line.
+Let's say your source system has an attribute AddressLineData with two components street number and street name. As part of a recent move, let's say the street number of the address changed, and you want to update only the street number portion of the address line.
Then in this case, you can use the following expression in your attribute mapping to extract the street number. `Replace([AddressLineData], ,"(?<streetNumber>^\\d*)","streetNumber", "888", , )`
Then in this case, you can use the following expression in your attribute mappin
* **replacementValue:** "888" * **Expression output:** 888 Tremont Street
-Here is another example where the domain suffix from a UPN is replaced with an empty string to generate login id without domain suffix.
+Here is another example where the domain suffix from a UPN is replaced with an empty string to generate login ID without domain suffix.
`Replace([userPrincipalName], , "(?<Suffix>@(.)*)", "Suffix", "", , )`
Then in this case, you can use the following expression in your attribute mappin
SelectUniqueValue(uniqueValueRule1, uniqueValueRule2, uniqueValueRule3, …) **Description:**
-Requires a minimum of two arguments, which are unique value generation rules defined using expressions. The function evaluates each rule and then checks the value generated for uniqueness in the target app/directory. The first unique value found will be the one returned. If all of the values already exist in the target, the entry will get escrowed and the reason gets logged in the audit logs. There is no upper bound to the number of arguments that can be provided.
+Requires a minimum of two arguments, which are unique value generation rules defined using expressions. The function evaluates each rule and then checks the value generated for uniqueness in the target app/directory. The first unique value found will be the one returned. If all of the values already exist in the target, the entry will get escrowed, and the reason gets logged in the audit logs. There is no upper bound to the number of arguments that can be provided.
- This function must be at the top-level and cannot be nested.
- This function cannot be applied to attributes that have a matching precedence.
- This function is only meant to be used for entry creations. When using it with an attribute, set the **Apply Mapping** property to **Only during object creation**.
- This function is currently only supported for "Workday to Active Directory User Provisioning" and "SuccessFactors to Active Directory User Provisioning". It cannot be used with other provisioning applications.
+ - The LDAP search that the *SelectUniqueValue* function performs in on-premises Active Directory doesn't escape special characters like diacritics. If you pass a string like "Jéssica Smith" that contains a special character, you will encounter processing errors. Nest the [NormalizeDiacritics](#normalizediacritics) function as shown in the example below to normalize special characters.
**Parameters:**
Example: Based on the user's first name, middle name and last name, you need to
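A hedged sketch of such an expression, assuming Workday-style source attributes *PreferredFirstName* and *PreferredLastName*, a space-stripping function named `StripSpaces`, and an illustrative `contoso.com` UPN suffix; the second rule falls back to a first-initial pattern if the value generated by the first rule already exists in the target:

```
SelectUniqueValue(
    Join("@", NormalizeDiacritics(StripSpaces(Join(".", [PreferredFirstName], [PreferredLastName]))), "contoso.com"),
    Join("@", NormalizeDiacritics(StripSpaces(Join(".", Mid([PreferredFirstName], 1, 1), [PreferredLastName]))), "contoso.com")
)
```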
SingleAppRoleAssignment([appRoleAssignments]) **Description:**
-Returns a single appRoleAssignment from the list of all appRoleAssignments assigned to a user for a given application. This function is required to convert the appRoleAssignments object into a single role name string. The best practice is to ensure only one appRoleAssignment is assigned to one user at a time. This function is not supported in scenarios where users have multiple app role assignments.
+Returns a single appRoleAssignment from the list of all appRoleAssignments assigned to a user for a given application. This function is required to convert the appRoleAssignments object into a single role name string. The best practice is to ensure only one appRoleAssignment is assigned to one user at a time. This function isn't supported in scenarios where users have multiple app role assignments.
**Parameters:**
Removes all space (" ") characters from the source string.
Switch(source, defaultValue, key1, value1, key2, value2, …) **Description:**
-When **source** value matches a **key**, returns **value** for that **key**. If **source** value doesn't match any keys, returns **defaultValue**. **Key** and **value** parameters must always come in pairs. The function always expects an even number of parameters. The function should not be used for referential attributes such as manager.
+When **source** value matches a **key**, returns **value** for that **key**. If **source** value doesn't match any keys, returns **defaultValue**. **Key** and **value** parameters must always come in pairs. The function always expects an even number of parameters. The function shouldn't be used for referential attributes such as manager.
> [!NOTE]
> Switch function performs a case-sensitive string comparison of the **source** and **key** values. If you'd like to perform a case-insensitive comparison, normalize the **source** string before comparison using a nested ToLower function and ensure that all **key** strings use lowercase.
> Example: `Switch(ToLower([statusFlag]), "0", "true", "1", "false", "0")`. In this example, the **source** attribute `statusFlag` may have values ("True" / "true" / "TRUE"). However, the Switch function will always convert it to lowercase string "true" before comparison with **key** parameters.
+> [!CAUTION]
+> For the **source** parameter, do not use the nested functions IsPresent, IsNull or IsNullOrEmpty. Instead use a literal empty string as one of the key values.
+> Example: `Switch([statusFlag], "Default Value", "true", "1", "", "0")`. In this example, if the **source** attribute `statusFlag` is empty, the Switch function will return the value 0.
+ **Parameters:** | Name | Required/ Repeating | Type | Notes |
ToLower(source, culture)
**Description:** Takes a *source* string value and converts it to lower case using the culture rules that are specified. If there is no *culture* info specified, then it will use Invariant culture.
-If you would like to set existing values in the target system to lower case, [update the schema for your target application](./customize-application-attributes.md#editing-the-list-of-supported-attributes) and set the property caseExact to 'true' for the attribute that you are interested in.
+If you would like to set existing values in the target system to lower case, [update the schema for your target application](./customize-application-attributes.md#editing-the-list-of-supported-attributes) and set the property caseExact to 'true' for the attribute that you're interested in.
**Parameters:** | Name | Required/ Repeating | Type | Notes | | | | | | | **source** |Required |String |Usually name of the attribute from the source object |
-| **culture** |Optional |String |The format for the culture name based on RFC 4646 is *languagecode2-country/regioncode2*, where *languagecode2* is the two-letter language code and *country/regioncode2* is the two-letter subculture code. Examples include ja-JP for Japanese (Japan) and en-US for English (United States). In cases where a two-letter language code is not available, a three-letter code derived from ISO 639-2 is used.|
+| **culture** |Optional |String |The format for the culture name based on RFC 4646 is *languagecode2-country/regioncode2*, where *languagecode2* is the two-letter language code and *country/regioncode2* is the two-letter subculture code. Examples include ja-JP for Japanese (Japan) and en-US for English (United States). In cases where a two-letter language code isn't available, a three-letter code derived from ISO 639-2 is used.|
#### Convert generated userPrincipalName (UPN) value to lower case
Example: You would like to generate the UPN value by concatenating the PreferredFirstName and PreferredLastName source fields and converting all characters to lower case.
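A hedged sketch of such a mapping (the `contoso.com` suffix is illustrative): `ToLower(Join("@", Join(".", [PreferredFirstName], [PreferredLastName]), "contoso.com"))` produces a value such as `pablo.gonsalves@contoso.com` from mixed-case source fields.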
ToUpper(source, culture)
**Description:** Takes a *source* string value and converts it to upper case using the culture rules that are specified. If there is no *culture* info specified, then it will use Invariant culture.
-If you would like to set existing values in the target system to upper case, [update the schema for your target application](./customize-application-attributes.md#editing-the-list-of-supported-attributes) and set the property caseExact to 'true' for the attribute that you are interested in.
+If you would like to set existing values in the target system to upper case, [update the schema for your target application](./customize-application-attributes.md#editing-the-list-of-supported-attributes) and set the property caseExact to 'true' for the attribute that you're interested in.
**Parameters:** | Name | Required/ Repeating | Type | Notes | | | | | | | **source** |Required |String |Usually name of the attribute from the source object. |
-| **culture** |Optional |String |The format for the culture name based on RFC 4646 is *languagecode2-country/regioncode2*, where *languagecode2* is the two-letter language code and *country/regioncode2* is the two-letter subculture code. Examples include ja-JP for Japanese (Japan) and en-US for English (United States). In cases where a two-letter language code is not available, a three-letter code derived from ISO 639-2 is used.|
+| **culture** |Optional |String |The format for the culture name based on RFC 4646 is *languagecode2-country/regioncode2*, where *languagecode2* is the two-letter language code and *country/regioncode2* is the two-letter subculture code. Examples include ja-JP for Japanese (Japan) and en-US for English (United States). In cases where a two-letter language code isn't available, a three-letter code derived from ISO 639-2 is used.|
### Word
The Word function returns a word contained within a string, based on parameters
If number < 1, returns empty string. If string is null, returns empty string.
-If string contains less than number words, or string does not contain any words identified by delimiters, an empty string is returned.
+If the string contains fewer than the specified number of words, or the string doesn't contain any words identified by delimiters, an empty string is returned.
**Parameters:**
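A hedged illustration, shown with a literal input string purely for clarity (in a real mapping the first parameter would usually be a source attribute): `Word("The quick brown fox", 3, " ")` returns "brown", because the space delimiter splits the string into four words and the third is requested.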
active-directory Howto Mfa Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md
If your organization uses [Azure AD Identity Protection](../identity-protection/
Risk policies include: - [Require all users to register for Azure AD Multi-Factor Authentication](../identity-protection/howto-identity-protection-configure-mfa-policy.md)-- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-with-conditional-access)-- [Require MFA for users with medium or high sign in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#sign-in-risk-with-conditional-access)
+- [Require a password change for users that are high-risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access)
+- [Require MFA for users with medium or high sign in risk](../identity-protection/howto-identity-protection-configure-risk-policies.md#sign-in-risk-policy-in-conditional-access)
### Convert users from per-user MFA to Conditional Access based MFA
active-directory All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md
description: View a list and description of all system reports available in Perm
-++ Last updated 02/23/2022
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
description: Frequently asked questions (FAQs) about Permissions Management.
-++ Last updated 04/20/2022
active-directory How To Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md
description: How to attach and detach permissions for groups, users, and service
-++ Last updated 02/23/2022
active-directory How To Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md
description: How to attach and detach permissions for users, roles, and groups f
-++ Last updated 02/23/2022
active-directory How To Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md
description: How to generate an on-demand report from a query in the **Audit** d
-++ Last updated 02/23/2022
active-directory How To Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md
description: How to clone a role/policy in the Just Enough Permissions (JEP) Con
-++ Last updated 02/23/2022
active-directory How To Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md
description: How to create and view activity alerts and alert triggers in Permis
-++ Last updated 02/23/2022
active-directory How To Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md
description: How to create or approve a request for permissions in the Remediati
-++ Last updated 02/23/2022
active-directory How To Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md
description: How to create a custom query in the Audit dashboard in Permissions
-++ Last updated 02/23/2022
active-directory How To Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md
description: How to select group-based permissions settings in Permissions Manag
-++ Last updated 02/23/2022
active-directory How To Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md
description: How to create a role/policy in the Remediation dashboard in Permiss
-++ Last updated 02/23/2022
active-directory How To Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-rule.md
description: How to create a rule in the Autopilot dashboard in Permissions Mana
-++ Last updated 02/23/2022
active-directory How To Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md
description: How to delete a role/policy in the Just Enough Permissions (JEP) Co
-++ Last updated 02/23/2022
active-directory How To Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md
description: How to modify a role/policy in the Remediation dashboard in Permiss
-++ Last updated 02/23/2022
active-directory How To Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-notifications-rule.md
description: How to view notification settings for a rule in the Autopilot dash
-++ Last updated 02/23/2022
active-directory How To Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-recommendations-rule.md
description: How to generate, view, and apply rule recommendations in the Autopi
-++ Last updated 02/23/2022
active-directory How To Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md
description: How to revoke access to high-risk and unused tasks or assign read-o
-++ Last updated 02/23/2022
active-directory How To View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-view-role-policy.md
description: How to view and filter information about roles/ policies in the Rem
-++ Last updated 02/23/2022
active-directory Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/integration-api.md
description: How to view the Permissions Management API integration settings and
-++ Last updated 02/23/2022
active-directory Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/multi-cloud-glossary.md
description: Permissions Management glossary
-++ Last updated 02/23/2022
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
description: How to add an account/ subscription/ project to Permissions Managem
-++ Last updated 02/23/2022
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
description: How to onboard an Amazon Web Services (AWS) account on Permissions
-++ Last updated 04/20/2022
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
description: How to onboard a Microsoft Azure subscription on Permissions Management.
-++ Last updated 04/20/2022
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
description: How to enable or disable the controller in Permissions Management a
-++ Last updated 02/23/2022
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
description: How to enable Permissions Management in your organization.
-++ Last updated 04/20/2022
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
description: How to onboard a Google Cloud Platform (GCP) project on Permissions
-++ Last updated 04/20/2022
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
description: An introduction to Permissions Management.
-++ Last updated 04/20/2022
active-directory Permissions Management Trial User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-user-guide.md
description: How to get started with your Entra Permissions free trial
-++ Last updated 09/01/2022
active-directory Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-explorer.md
Title: View roles and identities that can access account information from an ext
description: How to view information about identities that can access accounts from an external account in Permissions Management. -++ Last updated 02/23/2022
active-directory Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-settings.md
Title: View personal and organization information in Permissions Management
description: How to view personal and organization information in the Account settings dashboard in Permissions Management. -++ Last updated 02/23/2022
active-directory Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-audit-trail.md
description: How to filter and query user activity in Permissions Management.
-++ Last updated 02/23/2022
active-directory Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md
description: How to view data about the activity in your authorization system in
-++ Last updated 02/23/2022
active-directory Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-inventory.md
description: How to display an inventory of created resources and licenses for y
-++ Last updated 02/23/2022
active-directory Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md
description: How to view and configure settings for collecting data from your au
-++ Last updated 02/23/2022
active-directory Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-define-permission-levels.md
description: How to define and manage users, roles, and access levels in Permiss
-++ Last updated 02/23/2022
active-directory Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permission-analytics.md
description: How to create and view permission analytics triggers in the Permiss
-++ Last updated 02/23/2022
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
description: How to generate and download the Permissions analytics report in Pe
-++ Last updated 02/23/2022
active-directory Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md
description: How to view system reports in the Reports dashboard in Permissions
-++ Last updated 02/23/2022
active-directory Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md
description: How to create and view rule-based anomalies and anomaly triggers in
-++ Last updated 02/23/2022
active-directory Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-statistical-anomalies.md
description: How to create and view statistical anomalies and anomaly triggers i
-++ Last updated 02/23/2022
active-directory Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-create-custom-report.md
description: How to create, view, and share a custom report in the Permissions M
-++ Last updated 02/23/2022
active-directory Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md
description: How to generate and view a system report in the Permissions Managem
-++ Last updated 02/23/2022
active-directory Training Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/training-videos.md
description: Permissions Management training videos.
-++ Last updated 04/20/2022
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/troubleshoot.md
description: Troubleshoot issues with Permissions Management
-++ Last updated 02/23/2022
active-directory Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-audit-trail.md
description: How to use queries to see how users access information in an author
-++ Last updated 02/23/2022
active-directory Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-autopilot.md
description: How to view rules in the Autopilot dashboard in Permissions Managem
-++ Last updated 02/23/2022
active-directory Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md
description: How to view statistics and data about your authorization system in
-++ Last updated 02/23/2022
active-directory Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-remediation.md
description: How to view existing roles/policies and requests for permission in
-++ Last updated 02/23/2022
active-directory Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-tasks.md
description: How to view information about active and completed tasks in the Act
-++ Last updated 02/23/2022
active-directory Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md
description: How to view information about activity triggers in the Activity tri
-++ Last updated 02/23/2022
active-directory Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-user-management.md
description: How to manage users and groups in the User management dashboard in
-++ Last updated 02/23/2022
active-directory Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-access-keys.md
description: How to view analytic information about access keys in Permissions
-++ Last updated 02/23/2022
active-directory Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-resources.md
description: How to view usage analytics about active resources in Permissions M
-++ Last updated 02/23/2022
active-directory Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md
description: How to view analytic information about active tasks in Permissions
-++ Last updated 02/23/2022
active-directory Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-groups.md
description: How to view analytic information about groups in Permissions Manage
-++ Last updated 02/23/2022
active-directory Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-home.md
description: How to use the Analytics dashboard in Permissions Management to vie
-++ Last updated 02/23/2022
active-directory Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-serverless-functions.md
description: How to view analytic information about serverless functions in Perm
-++ Last updated 02/23/2022
active-directory Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-users.md
description: How to view analytic information about users in Permissions Managem
-++ Last updated 02/23/2022
active-directory Msal Js Prompt Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-prompt-behavior.md
In some cases however, the prompt value `none` can be used together with an inte
- [Single sign-on with MSAL.js](msal-js-sso.md) - [Handle errors and exceptions in MSAL.js](msal-error-handling-js.md) - [Handle ITP in Safari and other browsers where third-party cookies are blocked](reference-third-party-cookies-spas.md)-- [OAuth 2.0 authorization code flow](v2-oauth2-auth-code-flow.md)
+- [OAuth 2.0 authorization code flow on the Microsoft identity platform](v2-oauth2-auth-code-flow.md)
+- [OpenID Connect on the Microsoft identity platform](v2-protocols-oidc.md)
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
For a workflow triggered by a pull request event, specify an **Entity type** of
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: -- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod. - **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
https://management.azure.com/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RES
## Next steps -- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Select the **Kubernetes accessing Azure resources** scenario from the dropdown m
Fill in the **Cluster issuer URL**, **Namespace**, **Service account name**, and **Name** fields: -- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
+- **Cluster issuer URL** is the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster.
- **Service account name** is the name of the Kubernetes service account, which provides an identity for processes that run in a Pod. - **Namespace** is the service account namespace. - **Name** is the name of the federated credential, which can't be changed later.
az ad app federated-credential create --id f6475511-fd81-4965-a00e-41e7792b7b9c
### Kubernetes example
-*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+*issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
*subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`.
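A hedged sketch of how the complete command and its parameters file might look for this Kubernetes scenario; the `--parameters` file-based form, the `credential.json` file name, the issuer URL, and the `erp` namespace / `workload-sa` service account names are all illustrative assumptions:

```azurecli
# credential.json (illustrative contents)
# {
#     "name": "kubernetes-federated-credential",
#     "issuer": "https://oidc.prod-aks.azure.com/00001111-aaaa-2222-bbbb-3333cccc4444/",
#     "subject": "system:serviceaccount:erp:workload-sa",
#     "description": "Federated credential for the erp workload service account",
#     "audiences": ["api://AzureADTokenExchange"]
# }

az ad app federated-credential create \
    --id f6475511-fd81-4965-a00e-41e7792b7b9c \
    --parameters credential.json
```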
New-AzADAppFederatedCredential -ApplicationObjectId $appObjectId -Audience api:/
### Kubernetes example - *ApplicationObjectId*: the object ID of the app (not the application (client) ID) you previously registered in Azure AD.-- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *Issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *Subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *Name* is the name of the federated credential, which can't be changed later. - *Audience* lists the audiences that can appear in the `aud` claim of the external token.
And you get the response:
Run the following method to configure a federated identity credential on an app and create a trust relationship with a Kubernetes service account. Specify the following parameters: -- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer-preview) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
+- *issuer* is your service account issuer URL (the [OIDC issuer URL](../../aks/cluster-configuration.md#oidc-issuer) for the managed cluster or the [OIDC Issuer URL](https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/oidc-issuer.html) for a self-managed cluster).
- *subject* is the subject name in the tokens issued to the service account. Kubernetes uses the following format for subject names: `system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>`. - *name* is the name of the federated credential, which can't be changed later. - *audiences* lists the audiences that can appear in the external token. This field is mandatory. The recommended value is "api://AzureADTokenExchange".
az rest -m DELETE -u 'https://graph.microsoft.com/applications/f6475511-fd81-49
- To learn how to use workload identity federation for GitHub Actions, see [Configure a GitHub Actions workflow to get an access token](/azure/developer/github/connect-from-azure). - Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources. - For more information, read about how Azure AD uses the [OAuth 2.0 client credentials grant](v2-oauth2-client-creds-grant-flow.md#third-case-access-token-request-with-a-federated-credential) and a client assertion issued by another IdP to get a token.-- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
+- For information about the required format of JWTs created by external identity providers, read about the [assertion format](active-directory-certificate-credentials.md#assertion-format).
active-directory Create Access Review Privileged Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-privileged-access-groups.md
+
+ Title: Create an access review of Privileged Access Groups - Azure AD (preview)
+description: Learn how to create an access review of Privileged Access Groups in Azure Active Directory.
+++
+editor: markwahl-msft
++
+ na
++ Last updated : 09/14/2022++++
+
+# Create an access review of Privileged Access Groups in Azure AD (preview)
+
+This article describes how to create one or more access reviews for Privileged Access Groups. Reviews can be performed on both the active members of the group (those who are active at the time the review is created) and the eligible members of the group.
+
+## Prerequisites
+
+- Azure AD Premium P2.
+- Only Global administrators and Privileged Role administrators can create reviews on Privileged Access Groups. For more information, see [Use Azure AD groups to manage role assignments](../roles/groups-concept.md).
+
+For more information, see [License requirements](access-reviews-overview.md#license-requirements).
+
+## Create a Privileged Access Group access review
+
+### Scope
+1. Sign in to the Azure portal and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page.
+
+2. On the left menu, select **Access reviews**.
+
+3. Select **New access review** to create a new access review.
+
+ ![Screenshot that shows the Access reviews pane in Identity Governance.](./media/create-access-review/access-reviews.png)
+
+4. In the **Select what to review** box, select **Teams + Groups**.
+
+ ![Screenshot that shows creating an access review.](./media/create-access-review/select-what-review.png)
+
+5. Select **Teams + Groups** and then select **Select Teams + groups** under **Review Scope**. A list of groups to choose from appears on the right.
+
+ ![Screenshot that shows selecting Teams + Groups.](./media/create-access-review/create-privileged-access-groups-review.png)
+
+> [!NOTE]
+> When a Privileged Access Group (PAG) is selected, the users under review for the group will include all eligible users and active users in that group.
+
+6. Now you can select a scope for the review. Your options are:
+ - **Guest users only**: This option limits the access review to only the Azure AD B2B guest users in your directory.
+ - **Everyone**: This option scopes the access review to all user objects associated with the resource.
++
+7. If you're conducting a group membership review, you can create access reviews for only the inactive users in the group. In the *Users scope* section, check the box next to **Inactive users (on tenant level)**. If you check the box, the scope of the review will focus on inactive users only: those who haven't signed in, either interactively or non-interactively, to the tenant. Then, specify **Days inactive** with a number of days inactive up to 730 days (two years). Users in the group inactive for the specified number of days will be the only users in the review.
+
+> [!NOTE]
+> Recently created users are not affected when configuring the inactivity time. The Access Review will check if a user has been created in the time frame configured and disregard users who haven't existed for at least that amount of time. For example, if you set the inactivity time as 90 days and a guest user was created or invited less than 90 days ago, the guest user will not be in scope of the Access Review. This ensures that a user can sign in at least once before being removed.
+
+8. Select **Next: Reviews**.
+
+After you have reached this step, you may follow the instructions outlined under **Next: Reviews** in the [Create an access review of groups or applications](create-access-review.md#next-reviews) article to complete your access review.
+
+> [!NOTE]
+> Review of Privileged Access Groups will only assign active owner(s) as the reviewers. Eligible owners are not included. At least one fallback reviewer is required for a Privileged Access Groups review. If there are no active owner(s) when the review begins, the fallback reviewer(s) will be assigned to the review.
+
+## Next steps
+
+- [Create an access review of groups or applications](create-access-review.md)
+- [Approve activation requests for privileged access group members and owners (preview)](../privileged-identity-management/groups-approval-workflow.md)
active-directory Create Access Review https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md
na Previously updated : 08/24/2022 Last updated : 09/09/2022
If you are reviewing access to an application, then before creating the review,
If you choose either **Managers of users** or **Group owner(s)**, you can also specify a fallback reviewer. Fallback reviewers are asked to do a review when the user has no manager specified in the directory or if the group doesn't have an owner.
+ >[!IMPORTANT]
+ > For Privileged Access Groups (Preview), you must select **Group owner(s)**. It is mandatory to assign at least one fallback reviewer to the review. The review will only assign active owner(s) as the reviewer(s). Eligible owners are not included. If there are no active owners when the review begins, the fallback reviewer(s) will be assigned to the review.
+ ![Screenshot that shows New access review.](./media/create-access-review/new-access-review.png) 1. In the **Specify recurrence of review** section, specify the following selections:
After one or more access reviews have started, you might want to modify or updat
## Next steps
+- [Complete an access review of groups or applications](complete-access-review.md)
+- [Create an access review of Privileged Access Groups (preview)](create-access-review-privileged-access-groups.md)
- [Review access to groups or applications](perform-access-review.md) - [Review access for yourself to groups or applications](review-your-access.md)-- [Complete an access review of groups or applications](complete-access-review.md)+
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
Use the following steps to add approvers after selecting how many stages you req
![Access package - Requests - For users out of directory - First Approver](./media/entitlement-management-access-package-approval-policy/out-directory-first-approver.png)
-1. If you selected **Manager** as the first approver, select **Add fallback** to select one or more users or groups in your directory to be a fallback approver. Fallback approvers receive the request if entitlement management can't find the manager for the user requesting access.
+1. If you selected **Manager** as the first approver, select **Add fallback** to select one or more users or groups in your directory to be a fallback approver. Fallback approvers receive the request if entitlement management can't find the manager for the user requesting access.
The manager is found by entitlement management using the **Manager** attribute. The attribute is in the user's profile in Azure AD. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/active-directory-users-profile-azure-portal.md).
-1. If you selected **Choose specific approvers**, select **Add approvers** to choose one or more users or groups in your directory to be approvers.
+1. If you selected **Choose specific approvers**, select **Add approvers** to choose one or more users or groups in your directory to be approvers.
1. In the box under **Decision must be made in how many days?**, specify the number of days that an approver has to review a request for this access package.
For example, if you listed Alice and Bob as the first stage approver(s), list Ca
1. In the **Forward to alternate approver(s) after how many days** box, put in the number of days the approvers have to approve or deny a request. If no approvers have approved or denied the request before the request duration, the request expires (timeout), and the user will have to submit another request for the access package.
- Requests can only be forwarded to alternate approvers a day after the request duration reaches half-life, and the decision of the main approver(s) has to time-out after at least four days. If the request time-out is less or equal than three, there isn't enough time to forward the request to alternate approver(s). In this example, the duration of the request is 14 days. So, the request duration reaches half-life at day 7. So the request can't be forwarded earlier than day 8. Also, requests can't be forwarded on the last day of the request duration. So in the example, the latest the request can be forwarded is day 13.
+ Requests can only be forwarded to alternate approvers a day after the request duration reaches half-life, and the decision of the main approver(s) has to time out after at least four days. If the request time-out is three days or less, there isn't enough time to forward the request to alternate approver(s). In this example, the duration of the request is 14 days, so the request duration reaches half-life at day 7, and the request can't be forwarded earlier than day 8. Also, requests can't be forwarded on the last day of the request duration, so in this example the latest the request can be forwarded is day 13.
## Enable requests
For example, if you listed Alice and Bob as the first stage approver(s), list Ca
## Collect additional requestor information for approval
-In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or multiple choice questions at the time of request. There's a limit of 20 questions per policy and a limit of 25 answers for multiple choice questions. The questions will then be shown to approvers to help them make a decision.
+In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or Multiple Choice questions at the time of request. There's a limit of 20 questions per policy and a limit of 25 answers for Multiple Choice questions. The questions will then be shown to approvers to help them make a decision.
1. Go to the **Requestor information** tab and select the **Questions** sub tab.
In order to make sure users are getting access to the right access packages, you
![Access package - Policy- Configure localized text](./media/entitlement-management-access-package-approval-policy/add-localization-question.png)
-1. Select the **Answer format** in which you would like requestors to answer. Answer formats include: *short text*, *multiple choice*, and *long text*.
+1. Select the **Answer format** in which you would like requestors to answer. Answer formats include: *short text*, *Multiple Choice*, and *long text*.
![Access package - Policy- Select Edit and localize multiple choice answer format](./media/entitlement-management-access-package-approval-policy/answer-format-view-edit.png)
-1. If selecting multiple choice, select on the **Edit and localize** button to configure the answer options.
+1. If selecting Multiple Choice, select the **Edit and localize** button to configure the answer options.
1. After selecting Edit and localize the **View/edit question** pane will open. 1. Type in the response options you wish to give the requestor when answering the question in the **Answer values** boxes. 1. Type in as many responses as you need.
- 1. If you would like to add your own localization for the multiple choice options, select the **Optional language code** for the language in which you want to localize a specific option.
+ 1. If you would like to add your own localization for the Multiple Choice options, select the **Optional language code** for the language in which you want to localize a specific option.
1. In the language you configured, type the option in the Localized text box.
- 1. Once you've added all of the localizations needed for each multiple choice option, select **Save**.
+ 1. Once you've added all of the localizations needed for each Multiple Choice option, select **Save**.
![Access package - Policy- Enter multiple choice options](./media/entitlement-management-access-package-approval-policy/answer-multiple-choice.png)
-
+
+1. If you would like to include a syntax check for text answers to questions, you can also specify a custom regex pattern.
+ :::image type="content" source="media/entitlement-management-access-package-approval-policy/add-regex-localization.png" alt-text="Screenshot of the add regex localization policy." lightbox="media/entitlement-management-access-package-approval-policy/add-regex-localization.png":::
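+
+   For example, a hypothetical pattern such as `^[0-9]{6}$` would accept only a six-digit numeric answer (such as an employee ID); adjust the pattern to whatever format your organization expects.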
1. To require requestors to answer this question when requesting access to an access package, select the check box under **Required**. 1. Fill out the remaining tabs (for example, Lifecycle) based on your needs.
active-directory How To Bypassdirsyncoverrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-bypassdirsyncoverrides.md
+
+ Title: How to use the BypassDirSyncOverrides feature of an Azure AD tenant
+description: Describes how to use bypassdirsyncoverrides tenant feature to restore synchronization of Mobile and OtherMobile attributes from on-premises Active Directory.
++ Last updated : 08/11/2022+++++++
+# How to use the BypassDirSyncOverrides feature of an Azure AD tenant
+
+This article describes the _BypassDirsyncOverrides_ feature and how to restore synchronization of the Mobile and otherMobile attributes from on-premises Active Directory to Azure AD.
+
+Generally, synchronized users can't be changed from the Azure or Microsoft 365 admin portals, nor through PowerShell using the AzureAD or MSOnline modules. The exceptions are the Azure AD user attributes _MobilePhone_ and _AlternateMobilePhones_. These attributes are synchronized from the on-premises Active Directory attributes mobile and otherMobile, respectively, but end users can update their own phone number in the _MobilePhone_ attribute in Azure AD through their profile page. Admins can also update a synchronized user's _MobilePhone_ and _AlternateMobilePhones_ values in Azure AD by using the MSOnline PowerShell module.
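+
+For example, an admin could update a synchronized user's mobile number directly in Azure AD along these lines (a minimal sketch using the MSOnline module; the UPN and phone number are placeholder values):
+
+```powershell
+# Connect with the MSOnline module (assumes the module is installed and you have sufficient privileges)
+Connect-MsolService
+
+# Update the MobilePhone value of a synchronized user directly in Azure AD (placeholder values)
+Set-MsolUser -UserPrincipalName 'User1@Contoso.com' -MobilePhone '+1 425 555 0100'
+```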
+
+Giving users and admins the ability to update phone numbers directly in Azure AD enables enterprises to reduce the administrative overhead of managing users' phone numbers in on-premises Active Directory, because these values can change more frequently.
+
+The caveat, however, is that once a synchronized user's _MobilePhone_ or _AlternateMobilePhones_ number is updated via the admin portal or PowerShell, the synchronization API will no longer honor updates to these attributes when they originate from on-premises Active Directory. This behavior is commonly known as the _“DirSyncOverrides”_ feature. Administrators notice this behavior when updates to the Mobile or otherMobile attributes in Active Directory don't update the corresponding user's MobilePhone or AlternateMobilePhones values in Azure AD, even though the object is successfully synchronized through the Azure AD Connect engine.
+
+## Identifying users with different Mobile and otherMobile values
+
+You can export a list of users with different Mobile and otherMobile values between Active Directory and Azure Active Directory using _‘Compare-ADSyncToolsDirSyncOverrides’_ from _ADSyncTools_ PowerShell module. This will allow you to determine the users and respective values that are different between on-premises Active Directory and Azure Active Directory. This is important to know because enabling the _BypassDirSyncOverrides_ feature will overwrite all the different values in Azure Active Directory with the value coming from on-premises Active Directory.
+
+### Using Compare-ADSyncToolsDirSyncOverrides
+
+As a prerequisite, you need to be running Azure AD Connect version 2 or later and to install the latest ADSyncTools module from the PowerShell Gallery with the following command:
+
+```powershell
+Install-Module ADSyncTools
+```
+
+To compare all synchronized users' Mobile and OtherMobile values, run the following command:
+
+```powershell
+Compare-ADSyncToolsDirSyncOverrides -Credential $(Get-Credential)
+```
+
+>[!NOTE]
+> The target API used by this feature doesn't handle authentication user interactions. MFA or Conditional Access policies will block authentication. When prompted to enter credentials, use a Global Administrator account that doesn't have MFA enabled or any Conditional Access policy applied. As a last resort, create a temporary Global Administrator user account without MFA or Conditional Access that can be deleted after completing the desired operations using the BypassDirSyncOverrides feature.
+
+This function will export a CSV file with a list of users where Mobile or OtherMobile values in on-premises Active Directory are different than the respective MobilePhone or AlternateMobilePhones in Azure AD.
+
+At this stage, you can use this data to reset the values of the on-premises Active Directory _Mobile_ and _otherMobile_ properties to the values that are present in Azure Active Directory. This way you can capture the most updated phone numbers from Azure AD and persist this data in on-premises Active Directory before enabling the _BypassDirSyncOverrides_ feature. To do this, import the data from the resulting CSV file and then use _'Set-ADSyncToolsDirSyncOverridesUser'_ from the _ADSyncTools_ module to persist the value in on-premises Active Directory.
+
+For example, to import data from the CSV file and extract the values in Azure AD for a given UserPrincipalName, use the following command:
+
+```powershell
+$upn = '<UserPrincipalName>'
+$user = Import-Csv 'ADSyncTools-DirSyncOverrides_yyyyMMMdd-HHmmss.csv' |
+where UserPrincipalName -eq $upn |
+select UserPrincipalName,*InAAD
+Set-ADSyncToolsDirSyncOverridesUser -Identity $upn -MobileInAD $user.MobileInAAD
+```
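+
+To apply the same reset for every row in the export rather than one user at a time, you could loop over the file (a minimal sketch that simply repeats the single-user call above for each record; it assumes the CSV file name and column names produced by Compare-ADSyncToolsDirSyncOverrides as shown in the preceding example):
+
+```powershell
+# Import the full export and write each Azure AD mobile value back through the ADSyncTools cmdlet
+$report = Import-Csv 'ADSyncTools-DirSyncOverrides_yyyyMMMdd-HHmmss.csv'
+foreach ($user in $report) {
+    Set-ADSyncToolsDirSyncOverridesUser -Identity $user.UserPrincipalName -MobileInAD $user.MobileInAAD
+}
+```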
+
+## Enabling BypassDirSyncOverrides feature
+
+By default, the _BypassDirSyncOverrides_ feature is turned off. Enabling _BypassDirSyncOverrides_ allows your tenant to bypass any changes made to _MobilePhone_ or _AlternateMobilePhones_ by users or admins directly in Azure AD and always honor the values present in the on-premises Active Directory _Mobile_ or _OtherMobile_ attributes.
+
+If you don't want end users to update their own mobile phone numbers, and there's no requirement for admins to update mobile or alternative mobile phone numbers using PowerShell, you should leave the _BypassDirsyncOverrides_ feature enabled on the tenant.
+
+With this feature turned on, even if an end user or admin updates either _MobilePhone_ or _AlternateMobilePhones_ in Azure Active Directory, the values synchronized from on-premises Active Directory will persist upon the next sync cycle. This means that any updates to these values only persist when the update is performed in on-premises Active Directory and then synchronized to Azure Active Directory.
+
+### Enable the _BypassDirSyncOverrides_ feature:
+
+To enable the BypassDirSyncOverrides feature, use the MSOnline PowerShell module.
+
+```powershell
+Set-MsolDirSyncFeature -Feature BypassdirSyncOverrides -Enable $true
+```
+
+Once the feature is enabled, start a full synchronization cycle in Azure AD Connect using the following command:
+
+```powershell
+Start-ADSyncSyncCycle -PolicyType Initial
+```
+
+> [!NOTE]
+> Only objects with a different _MobilePhone_ or _AlternateMobilePhones_ value from on-premises Active Directory will be updated.
+
+### Verify the status of the _BypassDirSyncOverrides_ feature:
+
+```powershell
+Get-MsolDirSyncFeatures -Feature BypassdirSyncOverrides
+```
+
+## Disabling _BypassDirSyncOverrides_ feature
+
+If you want to restore the ability to update mobile phone numbers from the portal or PowerShell, you can disable the _BypassDirSyncOverrides_ feature by using the following MSOnline PowerShell module command:
+
+```powershell
+Set-MsolDirSyncFeature -Feature BypassdirSyncOverrides -Enable $false
+```
+
+When this feature is turned off, anytime a user or admin updates _MobilePhone_ or _AlternateMobilePhones_ directly in Azure AD, a _DirSyncOverrides_ entry is created, which prevents any future updates to these attributes coming from on-premises Active Directory. From this point on, a user or admin can only manage these attributes from Azure AD, because any new updates from the on-premises _Mobile_ or _OtherMobile_ attributes will be dismissed.
+
+## Managing mobile phone numbers in Azure AD and on-premises Active Directory
+
+To manage a user's phone numbers, an admin can use the following set of functions from the _ADSyncTools_ module to read, write, and clear the values in either Azure AD or on-premises Active Directory.
+
+### Get _Mobile_ and _OtherMobile_ properties from on-premises Active Directory:
+
+```powershell
+Get-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -FromAD
+```
+
+### Get _MobilePhone_ and _AlternateMobilePhones_ properties from Azure AD:
+
+```powershell
+Get-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -FromAzureAD
+```
+
+### Set _MobilePhone_ and _AlternateMobilePhones_ properties in Azure AD:
+
+```powershell
+Set-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -MobileInAD '999888777' -OtherMobileInAD '0987654','1234567'
+```
+
+### Set _Mobile_ and _otherMobile_ properties in on-premises Active Directory:
+
+```powershell
+Set-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -MobilePhoneInAAD '999888777' -AlternateMobilePhonesInAAD '0987654','1234567'
+```
+
+### Clear _MobilePhone_ and _AlternateMobilePhones_ properties in Azure AD:
+
+```powershell
+Clear-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -MobileInAD -OtherMobileInAD
+```
+
+### Clear _Mobile_ and _otherMobile_ properties in on-premises Active Directory:
+
+```powershell
+Clear-ADSyncToolsDirSyncOverridesUser 'User1@Contoso.com' -MobilePhoneInAAD -AlternateMobilePhonesInAAD
+```
+
+## Next steps
+
+Learn more about [Azure AD Connect: ADSyncTools PowerShell Module](reference-connect-adsynctools.md)
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
If you want all the latest features and updates, check this page and install wha
To read more about auto-upgrade, see [Azure AD Connect: Automatic upgrade](how-to-connect-install-automatic-upgrade.md).
+## 2.1.18.0
+
+### Release status
+10/5/2022: Released for download
+
+### Bug fixes
+ - We fixed a bug where an upgrade from version 1.6 to version 2.1 got stuck in a loop due to IsMemberOfLocalGroup enumeration.
+ - We fixed a bug where the Azure AD Connect Configuration Wizard was sending incorrect credentials (username format) while validating whether the account is an Enterprise Admin.
+ ## 2.1.16.0 ### Release status
active-directory Concept Identity Protection Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-policies.md
Title: Azure AD Identity Protection policies
-description: Identifying the three policies that are enabled with Identity Protection
+ Title: Azure AD Identity Protection risk-based access policies
+description: Identifying risk-based Conditional Access policies
Previously updated : 08/22/2022 Last updated : 10/04/2022 -+
-# Identity Protection policies
+# Risk-based access policies
-Azure Active Directory Identity Protection includes three default policies that administrators can choose to enable. These policies include limited customization but are applicable to most organizations. All of the policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md).
+Access control policies can be applied to protect organizations when a sign-in or user is detected to be at risk. Such policies are called **risk-based policies**.
-![Identity Protection policies](./media/concept-identity-protection-policies/identity-protection-policies.png)
+Azure AD Conditional Access offers two risk conditions: **Sign-in risk** and **User risk**. Organizations can create risk-based Conditional Access policies by configuring these two risk conditions and choosing an access control method. During each sign-in, Identity Protection sends the detected risk levels to Conditional Access, and the risk-based policies will apply if the policy conditions are satisfied.
-## Azure AD MFA registration policy
+![Diagram that shows a conceptual risk-based Conditional Access policy.](./media/concept-identity-protection-policies/risk-based-conditional-access-diagram.png)
+
+For example, as shown in the diagram below, if organizations have a sign-in risk policy that requires multifactor authentication when the sign-in risk level is medium or high, their users must complete multifactor authentication when their sign-in risk is medium or high.
-Identity Protection can help organizations roll out Azure AD Multifactor Authentication (MFA) using a Conditional Access policy requiring registration at sign-in. Enabling this policy is a great way to ensure new users in your organization have registered for MFA on their first day. Multifactor authentication is one of the self-remediation methods for risk events within Identity Protection. Self-remediation allows your users to take action on their own to reduce helpdesk call volume.
+![Diagram that shows a conceptual risk-based Conditional Access policy with self-remediation.](./media/concept-identity-protection-policies/risk-based-conditional-access-policy-example.png)
-More information about Azure AD Multifactor Authentication can be found in the article, [How it works: Azure AD Multifactor Authentication](../authentication/concept-mfa-howitworks.md).
+The example above also demonstrates a main benefit of a risk-based policy: **automatic risk remediation**. When a user successfully completes the required access control, like a secure password change, their risk is remediated. That sign-in session and user account won't be at risk, and no action is needed from the administrator.
-## Sign-in risk policy
+Allowing users to self-remediate using this process will significantly reduce the risk investigation and remediation burden on the administrators while protecting your organizations from security compromises. More information about risk remediation can be found in the article, [Remediate risks and unblock users](howto-identity-protection-remediate-unblock.md).
-Identity Protection analyzes signals from each sign-in, both real-time and offline, and calculates a risk score based on the probability that the sign-in wasn't really the user. Administrators can make a decision based on this risk score signal to enforce organizational requirements like:
+## Sign-in risk-based Conditional Access policy
+
+During each sign-in, Identity Protection analyzes hundreds of signals in real-time and calculates a sign-in risk level that represents the probability that the given authentication request isn't authorized. This risk level then gets sent to Conditional Access, where the organization's configured policies are evaluated. Administrators can configure sign-in risk-based Conditional Access policies to enforce access controls based on sign-in risk, including requirements such as:
- Block access - Allow access - Require multifactor authentication
-If risk is detected, users can perform multifactor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.
-
-> [!NOTE]
-> Users must have previously registered for Azure AD Multifactor Authentication before triggering the sign-in risk policy.
+If risks are detected on a sign-in, users can perform the required access control such as multifactor authentication to self-remediate and close the risky sign-in event to prevent unnecessary noise for administrators.
-### Custom Conditional Access policy
+![Screenshot of a sign-in risk-based Conditional Access policy.](./media/concept-identity-protection-policies/sign-in-risk-policy.png)
-Administrators can also choose to create a custom Conditional Access policy including sign-in risk as an assignment condition. More information about risk as a condition in a Conditional Access policy can be found in the article, [Conditional Access: Conditions](../conditional-access/concept-conditional-access-conditions.md#sign-in-risk)
+> [!NOTE]
+> Users must have previously registered for Azure AD multifactor authentication before triggering the sign-in risk policy.
-![Custom Conditional Access sign-in risk policy](./media/concept-identity-protection-policies/identity-protection-custom-sign-in-policy.png)
+## User risk-based Conditional Access policy
-## User risk policy
+Identity Protection analyzes signals about user accounts and calculates a risk score based on the probability that the user has been compromised. If a user has risky sign-in behavior, or their credentials have been leaked, Identity Protection will use these signals to calculate the user risk level. Administrators can configure user risk-based Conditional Access policies to enforce access controls based on user risk, including requirements such as:
-Identity Protection can calculate what it believes is normal for a user's behavior and use that to base decisions for their risk. User risk is a calculation of probability that an identity has been compromised. Administrators can make a decision based on this risk score signal to enforce organizational requirements. Administrators can choose to block access, allow access, or allow access but require a password change using [Azure AD self-service password reset](../authentication/howto-sspr-deployment.md).
+- Block access
+- Allow access but require a secure password change using [Azure AD self-service password reset](../authentication/howto-sspr-deployment.md).
-If risk is detected, users can perform self-service password reset to self-remediate and close the user risk event to prevent unnecessary noise for administrators.
+A secure password change will remediate the user risk and close the risky user event to prevent unnecessary noise for administrators.
> [!NOTE] > Users must have previously registered for self-service password reset before triggering the user risk policy.
+## Identity Protection policies
+
+While Identity Protection also offers a user interface for creating user risk policy and sign-in risk policy, we highly recommend that you [use Azure AD Conditional Access to create risk-based policies](howto-identity-protection-configure-risk-policies.md) for the following benefits:
+
+- Rich set of conditions to control access: Conditional Access offers a rich set of conditions such as applications and locations for configuration. The risk conditions can be used in combination with other conditions to create policies that best enforce your organizational requirements.
+- Multiple risk-based policies can be put in place to target different user groups or apply different access control for different risk levels.
+- Conditional Access policies can be created through Microsoft Graph API and can be tested first in report-only mode.
+- Manage all access policies in one place in Conditional Access.
+
+If you already have Identity Protection risk policies set up, we encourage you to [migrate them to Conditional Access](howto-identity-protection-configure-risk-policies.md#migrate-risk-policies-from-identity-protection-to-conditional-access).
+
+## Azure AD MFA registration policy
+
+Identity Protection can help organizations roll out Azure AD multifactor authentication (MFA) using a policy requiring registration at sign-in. Enabling this policy is a great way to ensure new users in your organization have registered for MFA on their first day. Multifactor authentication is one of the self-remediation methods for risk events within Identity Protection. Self-remediation allows your users to take action on their own to reduce helpdesk call volume.
+
+More information about Azure AD multifactor authentication can be found in the article, [How it works: Azure AD multifactor authentication](../authentication/concept-mfa-howitworks.md).
+ ## Next steps - [Enable Azure AD self-service password reset](../authentication/howto-sspr-deployment.md)-- [Enable Azure AD Multifactor Authentication](../authentication/howto-mfa-getstarted.md)-- [Enable Azure AD Multifactor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)
+- [Enable Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md)
+- [Enable Azure AD multifactor authentication registration policy](howto-identity-protection-configure-mfa-policy.md)
- [Enable sign-in and user risk policies](howto-identity-protection-configure-risk-policies.md)
active-directory Concept Identity Protection Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-risks.md
Last updated 08/16/2022
-+
Premium detections are visible only to Azure AD Premium P2 customers. Customers
### Risk levels
-Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [custom Identity protection policies](./concept-identity-protection-policies.md#custom-conditional-access-policy), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised.
+Identity Protection categorizes risk into three tiers: low, medium, and high. When configuring [Identity protection policies](./concept-identity-protection-policies.md), you can also configure it to trigger upon **No risk** level. No Risk means there's no active indication that the user's identity has been compromised.
Microsoft doesn't provide specific details about how risk is calculated. Each level of risk brings higher confidence that the user or sign-in is compromised. For example, something like one instance of unfamiliar sign-in properties for a user might not be as threatening as leaked credentials for another user.
active-directory Concept Identity Protection User Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-user-experience.md
With Azure Active Directory Identity Protection, you can:
-* Require users to register for Azure AD Multi-Factor Authentication (MFA)
+* Require users to register for Azure AD multifactor authentication (MFA)
* Automate remediation of risky sign-ins and compromised users All of the Identity Protection policies have an impact on the sign in experience for users. Allowing users to register for and use tools like Azure AD MFA and self-service password reset can lessen the impact. These tools along with the appropriate policy choices gives users a self-remediation option when they need it.
Enabling the Identity Protection policy requiring multi-factor authentication re
![More information required](./media/concept-identity-protection-user-experience/identity-protection-experience-more-info-mfa.png)
-1. Complete the guided steps to register for Azure AD Multi-Factor Authentication and complete your sign-in.
+1. Complete the guided steps to register for Azure AD multifactor authentication and complete your sign-in.
## Risky sign-in remediation
active-directory Howto Identity Protection Configure Mfa Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md
-# How To: Configure the Azure AD Multifactor Authentication registration policy
+# How To: Configure the Azure AD multifactor authentication registration policy
-Azure Active Directory (Azure AD) Identity Protection helps you manage the roll-out of Azure AD Multifactor Authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app you're signing in to.
+Azure Active Directory (Azure AD) Identity Protection helps you manage the roll-out of Azure AD multifactor authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app you're signing in to.
-## What is the Azure AD Multifactor Authentication registration policy?
+## What is the Azure AD multifactor authentication registration policy?
-Azure AD Multifactor Authentication provides a means to verify who you are using more than just a username and password. It provides a second layer of security to user sign-ins. In order for users to be able to respond to MFA prompts, they must first register for Azure AD Multifactor Authentication.
+Azure AD multifactor authentication provides a means to verify who you are using more than just a username and password. It provides a second layer of security to user sign-ins. In order for users to be able to respond to MFA prompts, they must first register for Azure AD multifactor authentication.
-We recommend that you require Azure AD Multifactor Authentication for user sign-ins because it:
+We recommend that you require Azure AD multifactor authentication for user sign-ins because it:
- Delivers strong authentication through a range of verification options. - Plays a key role in preparing your organization to self-remediate from risk detections in Identity Protection.
-For more information on Azure AD Multifactor Authentication, see [What is Azure AD Multifactor Authentication?](../authentication/howto-mfa-getstarted.md)
+For more information on Azure AD multifactor authentication, see [What is Azure AD multifactor authentication?](../authentication/howto-mfa-getstarted.md)
## Policy configuration
For an overview of the related user experience, see:
- [Enable Azure AD self-service password reset](../authentication/howto-sspr-deployment.md) -- [Enable Azure AD Multifactor Authentication](../authentication/howto-mfa-getstarted.md)
+- [Enable Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md)
active-directory Howto Identity Protection Configure Risk Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md
Previously updated : 08/23/2022 Last updated : 10/04/2022 -+ # Configure and enable risk policies
-As we learned in the previous article, [Identity Protection policies](concept-identity-protection-policies.md) we have two risk policies that we can enable in our directory.
+As we learned in the previous article, [Risk-based access policies](concept-identity-protection-policies.md), there are two types of risk policies in Azure Active Directory (Azure AD) Conditional Access you can set up to automate the response to risks and allow users to self-remediate when risk is detected:
- Sign-in risk policy - User risk policy
-Both policies work to automate the response to risk detections in your environment and allow users to self-remediate when risk is detected.
+![Screenshot of a Conditional Access policy showing risk as conditions.](./media/howto-identity-protection-configure-risk-policies/sign-in-risk-conditions.png)
## Choosing acceptable risk levels
-Organizations must decide the level of risk they're willing to accept balancing user experience and security posture.
+Organizations must decide the level of risk they want to require access control on balancing user experience and security posture.
-Microsoft's recommendation is to set the user risk policy threshold to **High** and the sign-in risk policy to **Medium and above** and allow self-remediation options. Choosing to block access rather than allowing self-remediation options, like password change and multi-factor authentication, will impact your users and administrators. Weigh this choice when configuring your policies.
-
-Choosing a **High** threshold reduces the number of times a policy is triggered and minimizes the impact to users. However, it excludes **Low** and **Medium** risk detections from the policy, which may not block an attacker from exploiting a compromised identity. Selecting a **Low** threshold introduces more user interrupts.
+Choosing to apply access control on a **High** risk level reduces the number of times a policy is triggered and minimizes the impact to users. However, it excludes **Low** and **Medium** risks from the policy, which may not block an attacker from exploiting a compromised identity. Selecting a **Low** risk level to require access control introduces more user interrupts.
Configured trusted [network locations](../conditional-access/location-condition.md) are used by Identity Protection in some risk detections to reduce false positives. ### Risk remediation
-Organizations can choose to block access when risk is detected. Blocking sometimes stops legitimate users from doing what they need to. A better solution is to allow self-remediation using Azure AD Multi-Factor Authentication (MFA) and self-service password reset (SSPR).
--- When a user risk policy triggers:
- - Administrators can require a secure password reset, requiring Azure AD MFA be done before the user creates a new password with SSPR, resetting the user risk.
-- When a sign-in risk policy triggers:
- - Azure AD MFA can be triggered, allowing to user to prove it's them by using one of their registered authentication methods, resetting the sign-in risk.
+Organizations can choose to block access when risk is detected. Blocking sometimes stops legitimate users from doing what they need to. A better solution is to allow self-remediation using Azure AD multifactor authentication (MFA) and secure self-service password reset (SSPR).
> [!WARNING] > Users must register for Azure AD MFA and SSPR before they face a situation requiring remediation. Users not registered are blocked and require administrator intervention. > > Password change (I know my password and want to change it to something new) outside of the risky user policy remediation flow does not meet the requirement for secure password reset.
-## Exclusions
+### Microsoft's recommendation
-Policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md). Organizations may need to exclude other accounts from specific policies based on the way the accounts are used. Exclusions should be reviewed regularly to see if they're still applicable.
+Microsoft recommends the below risk policy configurations to protect your organization:
-## Enable policies
+- User risk policy
+ - Require a secure password reset when user risk level is **High**. Azure AD MFA is required before the user can create a new password with SSPR to remediate their risk.
+- Sign-in risk policy
+ - Require Azure AD MFA when sign-in risk level is **Medium** or **High**, allowing users to prove it's them by using one of their registered authentication methods, remediating the sign-in risk.
-There are two locations where these policies may be configured, Conditional Access and Identity Protection. Configuration using Conditional Access policies is the preferred method, providing more context including:
+Requiring access control when risk level is low will introduce more user interrupts. Choosing to block access rather than allowing self-remediation options, like secure password reset and multifactor authentication, will impact your users and administrators. Weigh these choices when configuring your policies.
- - Enhanced diagnostic data
- - Report-only mode integration
- - Graph API support
- - Use more Conditional Access attributes like sign-in frequency in the policy
+## Exclusions
+
+Policies allow for excluding users such as your [emergency access or break-glass administrator accounts](../roles/security-emergency-access.md). Organizations may need to exclude other accounts from specific policies based on the way the accounts are used. Exclusions should be reviewed regularly to see if they're still applicable.
-Organizations can choose to deploy policies using the steps outlined below or using the [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md#conditional-access-templates-preview).
+## Enable policies
-> [!VIDEO https://www.youtube.com/embed/zEsbbik-BTE]
+Organizations can choose to deploy risk-based policies in Conditional Access using the steps outlined below or using the [Conditional Access templates (Preview)](../conditional-access/concept-conditional-access-policy-common.md#conditional-access-templates-preview).
Before organizations enable remediation policies, they may want to [investigate](howto-identity-protection-investigate-risk.md) and [remediate](howto-identity-protection-remediate-unblock.md) any active risks.
-### User risk with Conditional Access
+### User risk policy in Conditional Access
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Before organizations enable remediation policies, they may want to [investigate]
1. Select **Done**. 1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**. 1. Under **Conditions** > **User risk**, set **Configure** to **Yes**.
- 1. Under **Configure user risk levels needed for policy to be enforced**, select **High**.
+ 1. Under **Configure user risk levels needed for policy to be enforced**, select **High**. ([This guidance is based on Microsoft recommendations and may be different for each organization](#choosing-acceptable-risk-levels))
1. Select **Done**. 1. Under **Access controls** > **Grant**. 1. Select **Grant access**, **Require password change**.
Before organizations enable remediation policies, they may want to [investigate]
1. Select **Sign-in frequency**. 1. Ensure **Every time** is selected. 1. Select **Select**.
-1. Confirm your settings, and set **Enable policy** to **On**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create your policy.
-### Sign in risk with Conditional Access
+After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
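+
+If you'd rather script this policy than build it in the portal, a rough equivalent can be created with the Microsoft Graph PowerShell SDK (a minimal sketch in report-only mode; the display name and excluded break-glass account ID are placeholders, and the sign-in frequency session control from the steps above is omitted for brevity):
+
+```powershell
+# Requires the Microsoft Graph PowerShell SDK and consent to Policy.ReadWrite.ConditionalAccess
+Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'
+
+$userRiskPolicy = @{
+    displayName   = 'User risk - require secure password change (report-only)'  # placeholder name
+    state         = 'enabledForReportingButNotEnforced'                         # report-only mode
+    conditions    = @{
+        users          = @{
+            includeUsers = @('All')
+            excludeUsers = @('<break-glass-account-object-id>')                 # placeholder
+        }
+        applications   = @{ includeApplications = @('All') }
+        userRiskLevels = @('high')
+    }
+    grantControls = @{
+        operator        = 'AND'
+        builtInControls = @('mfa', 'passwordChange')  # password change must be combined with MFA using AND
+    }
+}
+
+New-MgIdentityConditionalAccessPolicy -BodyParameter $userRiskPolicy
+```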
+
+### Sign-in risk policy in Conditional Access
-1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Conditional Access Administrator.
+1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator.
1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
Before organizations enable remediation policies, they may want to [investigate]
1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. 1. Select **Done**. 1. Under **Cloud apps or actions** > **Include**, select **All cloud apps**.
-1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**.
+1. Under **Conditions** > **Sign-in risk**, set **Configure** to **Yes**. Under **Select the sign-in risk level this policy will apply to**. ([This guidance is based on Microsoft recommendations and may be different for each organization](#choosing-acceptable-risk-levels))
1. Select **High** and **Medium**. 1. Select **Done**. 1. Under **Access controls** > **Grant**.
- 1. Select **Grant access**, **Require multi-factor authentication**.
+ 1. Select **Grant access**, **Require multifactor authentication**.
1. Select **Select**. 1. Under **Session**. 1. Select **Sign-in frequency**. 1. Ensure **Every time** is selected. 1. Select **Select**.
-1. Confirm your settings and set **Enable policy** to **On**.
+1. Confirm your settings and set **Enable policy** to **Report-only**.
1. Select **Create** to create your policy.
+After confirming your settings using [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md), an administrator can move the **Enable policy** toggle from **Report-only** to **On**.
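+
+A similar sketch for the sign-in risk policy, again assuming the Microsoft Graph PowerShell SDK and placeholder values (the sign-in frequency session control from the steps above is omitted for brevity):
+
+```powershell
+# Requires the Microsoft Graph PowerShell SDK and consent to Policy.ReadWrite.ConditionalAccess
+Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'
+
+$signInRiskPolicy = @{
+    displayName   = 'Sign-in risk - require MFA (report-only)'   # placeholder name
+    state         = 'enabledForReportingButNotEnforced'          # report-only mode
+    conditions    = @{
+        users            = @{
+            includeUsers = @('All')
+            excludeUsers = @('<break-glass-account-object-id>')  # placeholder
+        }
+        applications     = @{ includeApplications = @('All') }
+        signInRiskLevels = @('high', 'medium')
+    }
+    grantControls = @{
+        operator        = 'OR'
+        builtInControls = @('mfa')
+    }
+}
+
+New-MgIdentityConditionalAccessPolicy -BodyParameter $signInRiskPolicy
+```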
+
+## Migrate risk policies from Identity Protection to Conditional Access
+
+While Identity Protection also provides two risk policies with limited conditions, we highly recommend setting up risk-based policies in Conditional Access for the following benefits:
+
+ - Enhanced diagnostic data
+ - Report-only mode integration
+ - Graph API support
+ - Use more Conditional Access attributes like sign-in frequency in the policy
+
+If you already have risk policies enabled in Identity Protection, we highly recommend that you migrate them to Conditional Access:
+
+![Screenshots showing the migration of a sign-in risk policy to Conditional Access.](./media/howto-identity-protection-configure-risk-policies/sign-in-risk-policy-migration.png)
+
+### Migrating to Conditional Access
+
+1. **Create an equivalent** [user risk-based](#user-risk-policy-in-conditional-access) and [sign-in risk-based](#sign-in-risk-policy-in-conditional-access) policy in Conditional Access in report-only mode. You can create a policy with the steps above or using [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md#common-conditional-access-policies) based on Microsoft's recommendations and your organizational requirements.
+ 1. Ensure that the new Conditional Access risk policy works as expected by testing it in [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md).
+1. **Enable** the new Conditional Access risk policy. You can choose to have both policies running side-by-side to confirm the new policies are working as expected before turning off the Identity Protection risk policies.
+ 1. Browse back to **Azure Active Directory** > **Security** > **Conditional Access**.
+ 1. Select this new policy to edit it.
+ 1. Set **Enable policy** to **On** to enable the policy.
+1. **Disable** the old risk policies in Identity Protection.
+ 1. Browse to **Azure Active Directory** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy.
+ 1. Set **Enforce policy** to **Off**.
+1. Create other risk policies if needed in [Conditional Access](../conditional-access/concept-conditional-access-policy-common.md).
+ ## Next steps -- [Enable Azure AD Multi-Factor Authentication registration policy](howto-identity-protection-configure-mfa-policy.md)
+- [Enable Azure AD multifactor authentication registration policy](howto-identity-protection-configure-mfa-policy.md)
- [What is risk](concept-identity-protection-risks.md) - [Investigate risk detections](howto-identity-protection-investigate-risk.md) - [Simulate risk detections](howto-identity-protection-simulate-risk.md)-- [Require reauthentication every time](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time)
+- [Require reauthentication every time](../conditional-access/howto-conditional-access-session-lifetime.md#require-reauthentication-every-time)
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
For more information about what happens when confirming compromise, see the sect
### Self-remediation with risk policy
-If you allow users to self-remediate, with Azure AD Multi-Factor Authentication (MFA) and self-service password reset (SSPR) in your risk policies, they can unblock themselves when risk is detected. These detections are then considered closed. Users must have previously registered for Azure AD MFA and SSPR for use when risk is detected.
+If you allow users to self-remediate, with Azure AD multifactor authentication (MFA) and self-service password reset (SSPR) in your risk policies, they can unblock themselves when risk is detected. These detections are then considered closed. Users must have previously registered for Azure AD MFA and SSPR for use when risk is detected.
Some detections may not raise risk to the level where a user self-remediation would be required but administrators should still evaluate these detections. Administrators may determine that extra measures are necessary like [blocking access from locations](../conditional-access/howto-conditional-access-policy-location.md) or lowering the acceptable risk in their policies.
active-directory Howto Identity Protection Simulate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md
More information about each risk detection can be found in the article, What is
Completing the following procedure requires you to use: - The [Tor Browser](https://www.torproject.org/projects/torbrowser.html.en) to simulate anonymous IP addresses. You might need to use a virtual machine if your organization restricts using the Tor browser.-- A test account that isn't yet registered for Azure AD Multi-Factor Authentication.
+- A test account that isn't yet registered for Azure AD multifactor authentication.
**To simulate a sign-in from an anonymous IP, perform the following steps**:
The procedure below uses a newly created:
Completing the following procedure requires you to use a user account that has: - At least a 30-day sign-in history.-- Azure AD Multi-Factor Authentication enabled.
+- Azure AD multifactor authentication enabled.
**To simulate a sign-in from an unfamiliar location, perform the following steps**:
active-directory Groups Approval Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md
Title: Approve activation requests for group members and owners in Privileged Identity Management - Azure AD description: Learn how to approve or deny requests for role-assignable groups in Azure AD Privileged Identity Management (PIM).
na Previously updated : 06/24/2022 Last updated : 08/16/2022 -+
When you activate a role in Privileged Identity Management, the activation may n
## Next steps
+- [Create an access review of Privileged Access Groups (preview)](../governance/create-access-review-privileged-access-groups.md)
- [Extend or renew group assignments in Privileged Identity Management](pim-resource-roles-renew-extend.md) - [Email notifications in Privileged Identity Management](pim-email-notifications.md) - [Approve or deny requests for group assignments in Privileged Identity Management](azure-ad-pim-approval-workflow.md)
active-directory Workbook Risk Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-risk-analysis.md
Last updated 08/26/2022 -+ - # Identity protection risk analysis workbook Azure AD Identity Protection detects, remediates, and prevents compromised identities. As an IT administrator, you want to understand risk trends in your organizations and opportunities for better policy configuration. With the Identity Protection Risky Analysis Workbook, you can answer common questions about your Identity Protection implementation. This article provides you with an overview of this workbook. - ## Description ![Workbook category](./media/workbook-risk-analysis/workbook-category.png) -
-As an IT administrator, you need to understand trends in identity risks and gaps in your policy implementations to ensure you are best protecting your organizations from identity compromise. The identity protection risk analysis workbook helps you analyze the state of risk in your organization.
+As an IT administrator, you need to understand trends in identity risks and gaps in your policy implementations, to ensure you're best protecting your organizations from identity compromise. The identity protection risk analysis workbook helps you analyze the state of risk in your organization.
**This workbook:** - Provides visualizations of where in the world risk is being detected.- - Allows you to understand the trends in real time vs. Offline risk detections.- - Provides insight into how effective you are at responding to risky users. -
-
-
- ## Sections This workbook has five sections: - Heatmap of risk detections- - Offline vs real-time risk detections- - Risk detection trends- - Risky users- - Summary ---
-
-- ## Filters - This workbook supports setting a time range filter. - ![Set time range filter](./media/workbook-risk-analysis/time-range-filter.png) There are more filters in the risk detection trends and risky users sections.
There are more filters in the risk detection trends and risky users sections.
Risk Detection Trends: - Detection timing type (real-time or offline)- - Risk level (low, medium, high, or none) Risky Users: - Risk detail (which indicates what changed a userΓÇÖs risk level)- - Risk level (low, medium, high, or none) - ## Best practices
+- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md#sign-in-risk-based-conditional-access-policy)** - To prompt for multi-factor authentication (MFA) on medium risk or above. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.
-- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md)** - To prompt for multi-factor authentication (MFA) on medium risk or above. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.--- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-with-conditional-access)** - To enable users to securely remediate their accounts when they are high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the userΓÇÖs credentials to a safe state.----
+- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access)** - To enable users to securely remediate their accounts when they're high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the user's credentials to a safe state.
## Next steps - To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md). - - For more information about Azure AD workbooks, see [How to use Azure AD workbooks](howto-use-azure-monitor-workbooks.md).-
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md
Previously updated : 06/30/2022 Last updated : 10/05/2022
This article describes how to add users, groups, or devices to administrative un
- Azure AD Premium P1 or P2 license for each administrative unit administrator - Azure AD Free licenses for administrative unit members-- Privileged Role Administrator or Global Administrator
+- To add existing users, groups, or devices:
+ - Privileged Role Administrator or Global Administrator
+- To create new groups:
+ - Groups Administrator (scoped to the administrative unit or entire directory) or Global Administrator
- Microsoft Graph PowerShell - Admin consent when using Graph explorer for Microsoft Graph API
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users with this role can't change the credentials or reset MFA for members and o
Assign the Permissions Management Administrator role to users who need to do the following tasks: -- Manage all aspects of Entry Permissions Management, when the service is present
+- Manage all aspects of Entra Permissions Management, when the service is present
Learn more about Permissions Management roles and polices at [View information about roles/policies](../cloud-infrastructure-entitlement-management/how-to-view-role-policy.md).
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Title: Cluster configuration in Azure Kubernetes Services (AKS)
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) Previously updated : 09/29/2022 Last updated : 10/04/2022 # Configure an AKS cluster
To remove Node Restriction from a cluster.
az aks update -n aks -g myResourceGroup --disable-node-restriction ```
-## OIDC Issuer (Preview)
+## OIDC Issuer
This enables an OIDC Issuer URL of the provider which allows the API server to discover public signing keys. > [!WARNING]
-> Enable/disable OIDC Issuer changes the current service account token issuer to a new value, which causes some down time and make API server restart. If the application pods based on service account token keep in failed status after enable/disable OIDC Issuer, it's recommended to restart the pods manually.
+> Enabling or disabling the OIDC Issuer changes the current service account token issuer to a new value, which can cause downtime and restarts the API server. If application pods that use a service account token remain in a failed state after you enable or disable the OIDC Issuer, we recommend you manually restart the pods.
-### Before you begin
-
-You must have the following resource installed:
-
-* The Azure CLI
-* The `aks-preview` extension version 0.5.50 or higher
-* Kubernetes version 1.19.x or higher
-
-### Install the aks-preview Azure CLI extension
--
-To install the aks-preview extension, run the following command:
-
-```azurecli
-az extension add --name aks-preview
-```
-
-Run the following command to update to the latest version of the extension released:
+### Prerequisites
-```azurecli
-az extension update --name aks-preview
-```
+* The Azure CLI version 2.42.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+* AKS version 1.22 and higher. If your cluster is running version 1.21 and the OIDC Issuer preview is enabled, we recommend you upgrade the cluster to the minimum required version supported.
### Create an AKS cluster with OIDC Issuer
To get the OIDC Issuer URL, run the following command. Replace the default value
az aks show -n myAKScluster -g myResourceGroup --query "oidcIssuerProfile.issuerUrl" -otsv ```
+### Rotate the OIDC key
+
+To rotate the OIDC key, run the following command. Replace the default values for the cluster name and the resource group name.
+
+```azurecli-interactive
+az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
+```
+
+> [!Important]
+> Once you rotate the key, the old key (key1) expires after 24 hours. This means that both the old key (key1) and the new key (key2) are valid within the 24-hour period. If you want to invalidate the old key (key1) immediately, you need to rotate the OIDC key twice. Then key2 and key3 are valid, and key1 is invalid.
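As a sketch of that double rotation, you can run the same command twice in succession; the cluster and resource group names are the same placeholders used in the command above.

```azurecli-interactive
# First rotation: key1 (old) and key2 (new) are both valid for up to 24 hours
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup

# Second rotation: key2 and key3 are now valid, and key1 is invalidated
az aks oidc-issuer rotate-signing-keys -n myAKSCluster -g myResourceGroup
```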
+ ## Next steps -- Learn how [upgrade the node images](node-image-upgrade.md) in your cluster.
+- Learn how to [upgrade the node images](node-image-upgrade.md) in your cluster.
- See [Upgrade an Azure Kubernetes Service (AKS) cluster](upgrade-cluster.md) to learn how to upgrade your cluster to the latest version of Kubernetes. - Read more about [`containerd` and Kubernetes](https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/) - See the list of [Frequently asked questions about AKS](faq.md) to find answers to some common AKS questions.
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
To validate that the secrets are mounted at the volume path that's specified in
[az-aks-show]: /cli/azure/aks#az-aks-show [az-rest]: /cli/azure/reference-index#az-rest [az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
-[enable-oidc-issuer]: cluster-configuration.md#oidc-issuer-preview
+[enable-oidc-issuer]: cluster-configuration.md#oidc-issuer
<!-- LINKS EXTERNAL -->
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Included among these solutions are Kubernetes application-based Container offers
## Register resource providers
-You must have registered the `Microsoft.KubernetesConfiguration` and `Microsoft.ContainerService` providers on your subscription using the `az provider register` command:
+You must have registered the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription using the `az provider register` command:
```azurecli-interactive
-az provider register --namespace Microsoft.KubernetesConfiguration --wait
az provider register --namespace Microsoft.ContainerService --wait
+az provider register --namespace Microsoft.KubernetesConfiguration --wait
``` ## Browse offers
az provider register --namespace Microsoft.ContainerService --wait
- > [!IMPORTANT] > The *Azure Containers* category includes both Kubernetes applications and standalone container images. This walkthrough is Kubernetes application-specific. If you find the steps to deploy an offer differ in some way, you are most likely trying to deploy a container image-based offer instead of a Kubernetes-application based offer.
+ >
+ > To ensure you're searching for Kubernetes applications, include the term `KubernetesApps` in your search.
- Once you've decided on an application, click on the offer.
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
Title: Create a private Azure Kubernetes Service cluster
description: Learn how to create a private Azure Kubernetes Service (AKS) cluster Previously updated : 05/27/2022 Last updated : 10/05/2022
Private cluster is available in public regions, Azure Government, and Azure Chin
## Prerequisites
-* Azure CLI >= 2.28.0 or Azure CLI with aks-preview extension 0.5.29 or later.
-* If using ARM or the rest API, the AKS API version must be 2021-05-01 or later.
-* The Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported.
-* To use a custom DNS server, add the Azure DNS IP 168.63.129.16 as the upstream DNS server in the custom DNS server. For more information about the Azure DNS IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16]
+* The Azure CLI version 2.28.0 and higher.
+* The aks-preview extension 0.5.29 or higher.
+* If using ARM or the Azure REST API, the AKS API version must be 2021-05-01 or higher.
+* Azure Private Link service is supported on Standard Azure Load Balancer only. Basic Azure Load Balancer isn't supported.
+* To use a custom DNS server, add the Azure public IP address 168.63.129.16 as the upstream DNS server in the custom DNS server. For more information about the Azure IP address, see [What is IP address 168.63.129.16?][virtual-networks-168.63.129.16]
## Create a private AKS cluster
Create a resource group or use an existing resource group for your AKS cluster.
az group create -l westus -n MyResourceGroup ```
-### Default basic networking
+### Default basic networking
```azurecli-interactive az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --load-balancer-sku standard --enable-private-cluster
az aks create \
--dns-service-ip 10.2.0.10 \ --service-cidr 10.2.0.0/24 ```
-Where `--enable-private-cluster` is a mandatory flag for a private cluster.
+
+Where `--enable-private-cluster` is a mandatory flag for a private cluster.
> [!NOTE] > If the Docker bridge address CIDR (172.17.0.1/16) clashes with the subnet CIDR, change the Docker bridge address appropriately.
+## Use custom domains
+
+If you want to configure custom domains that can only be resolved internally, see [Use custom domains][use-custom-domains] for more information.
+ ## Disable Public FQDN The following parameters can be leveraged to disable Public FQDN.
az aks create -n <private-cluster-name> -g <private-cluster-resource-group> --lo
az aks update -n <private-cluster-name> -g <private-cluster-resource-group> --disable-public-fqdn ```
-## Configure Private DNS Zone
+## Configure Private DNS Zone
The following parameters can be leveraged to configure Private DNS Zone.
Creating a VM in the same VNET as the AKS cluster is the easiest option. Express
## Virtual network peering As mentioned, virtual network peering is one way to access your private cluster. To use virtual network peering, you need to set up a link between virtual network and the private DNS zone.
-
+ 1. Go to the node resource group in the Azure portal.
-2. Select the private DNS zone.
+2. Select the private DNS zone.
3. In the left pane, select the **Virtual network** link. 4. Create a new link to add the virtual network of the VM to the private DNS zone. It takes a few minutes for the DNS zone link to become available. 5. In the Azure portal, navigate to the resource group that contains your cluster's virtual network.
Once the A record is created, link the private DNS zone to the virtual network t
> [!WARNING] > If the private cluster is stopped and restarted, the private cluster's original private link service is removed and re-created, which breaks the connection between your private endpoint and the private cluster. To resolve this issue, delete and re-create any user created private endpoints linked to the private cluster. DNS records will also need to be updated if the re-created private endpoints have new IP addresses.
-## Limitations
+## Limitations
+ * IP authorized ranges can't be applied to the private API server endpoint, they only apply to the public API server * [Azure Private Link service limitations][private-link-service] apply to private clusters. * No support for Azure DevOps Microsoft-hosted Agents with private clusters. Consider using [Self-hosted Agents](/azure/devops/pipelines/agents/agents?tabs=browser). * If you need to enable Azure Container Registry to work with a private AKS cluster, [set up a private link for the container registry in the cluster virtual network][container-registry-private-link] or set up peering between the Container Registry virtual network and the private cluster's virtual network. * No support for converting existing AKS clusters into private clusters
-* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
+* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
<!-- LINKS - internal -->
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
[private-link-service]: ../private-link/private-link-service-overview.md#limitations [private-endpoint-service]: ../private-link/private-endpoint-overview.md [virtual-network-peering]: ../virtual-network/virtual-network-peering-overview.md
-[azure-bastion]: ../bastion/tutorial-create-host-portal.md
[express-route-or-vpn]: ../expressroute/expressroute-about-virtual-network-gateways.md
-[devops-agents]: /azure/devops/pipelines/agents/agents
-[availability-zones]: availability-zones.md
[command-invoke]: command-invoke.md [container-registry-private-link]: ../container-registry/container-registry-private-link.md [virtual-networks-name-resolution]: ../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server [virtual-networks-168.63.129.16]: ../virtual-network/what-is-ip-address-168-63-129-16.md
+[use-custom-domains]: coredns-custom.md#use-custom-domains
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd Previously updated : 09/01/2022 Last updated : 10/05/2022 # Set up Azure App Service access restrictions
-By setting up access restrictions, you can define a priority-ordered allow/deny list that controls network access to your app. The list can include IP addresses or Azure Virtual Network subnets. When there are one or more entries, an implicit *deny all* exists at the end of the list.
+By setting up access restrictions, you can define a priority-ordered allow/deny list that controls network access to your app. The list can include IP addresses or Azure Virtual Network subnets. When there are one or more entries, an implicit *deny all* exists at the end of the list. To learn more about access restrictions, go to the [access restrictions overview](./overview-access-restrictions.md).
The access restriction capability works with all Azure App Service-hosted workloads. The workloads can include web apps, API apps, Linux apps, Linux custom containers and Functions.
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
In addition to configuring the Health check options, you can also configure the
| App setting name | Allowed values | Description | |-|-|-| |`WEBSITE_HEALTHCHECK_MAXPINGFAILURES` | 2 - 10 | The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to `2`, your instances will be removed after `2` failed pings. (Default value is `10`) |
-|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 0 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set app setting to a value between `0` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). |
+|`WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT` | 1 - 100 | By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded. <br /> To override this behavior, set the app setting to a value between `1` and `100`. A higher value means more unhealthy instances will be removed (default value is `50`). |
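As a hedged example, both settings can be applied with the Azure CLI; the app name and resource group below are placeholders, and the values are only illustrative.

```azurecli
# Remove an instance after 2 failed pings; allow up to half the instances
# to be excluded from the load balancer at one time
az webapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> \
    --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=2 WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT=50
```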
#### Authentication and security
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 09/27/2022 Last updated : 10/05/2022
Through application routing or configuration routing options, you can configure
Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled. * Only traffic configured in application or configuration routing is subject to the NSGs and UDRs that are applied to your integration subnet.
-* When **Route All** is enabled, outbound traffic from your app is still sent from the addresses that are listed in your app properties, unless you provide routes that direct the traffic elsewhere.
+* When **Route All** is enabled, the source address for your outbound public traffic from your app is still one of the IP addresses that are listed in your app properties. If you route your traffic through a firewall or a NAT gateway, the source IP address will then originate from this service.
Learn [how to configure application routing](./configure-vnet-integration-routing.md).
app-service Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md
compliant with the specific standard.
- **Configure Function app slots to use the latest TLS version** - New policy created - **App Service apps should use latest 'HTTP Version'**
- - Updated scope to include Windows apps
+ - Update scope to include Windows apps
- **Function apps should use latest 'HTTP Version'**
- - Updated scope to include Windows apps
+ - Update scope to include Windows apps
+- **App Service Environment apps should not be reachable over public internet**
+ - Modify policy definition to remove check on API version
### September 2022
app-service Webjobs Sdk How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-sdk-how-to.md
For more information about binding expressions, see [Binding expressions and pat
Sometimes you want to specify a queue name, a blob name or container, or a table name in code rather than hard-coding it. For example, you might want to specify the queue name for the `QueueTrigger` attribute in a configuration file or environment variable.
-You can do that by passing a `NameResolver` object in to the `JobHostConfiguration` object. You include placeholders in trigger or binding attribute constructor parameters, and your `NameResolver` code provides the actual values to be used in place of those placeholders. You identify placeholders by surrounding them with percent (%) signs, as shown here:
+You can do that by passing a custom name resolver during configuration. You include placeholders in trigger or binding attribute constructor parameters, and your resolver code provides the actual values to be used in place of those placeholders. You identify placeholders by surrounding them with percent (%) signs, as shown here:
```cs public static void WriteLog([QueueTrigger("%logqueue%")] string logMessage)
public static void WriteLog([QueueTrigger("%logqueue%")] string logMessage)
This code lets you use a queue named `logqueuetest` in the test environment and one named `logqueueprod` in production. Instead of a hard-coded queue name, you specify the name of an entry in the `appSettings` collection.
-There's a default `NameResolver` that takes effect if you don't provide a custom one. The default gets values from app settings or environment variables.
+There's a default resolver that takes effect if you don't provide a custom one. The default gets values from app settings or environment variables.
-Your `NameResolver` class gets the queue name from `appSettings`, as shown here:
+Starting in .NET Core 3.1, the [`ConfigurationManager`](/dotnet/api/system.configuration.configurationmanager) you use requires the [System.Configuration.ConfigurationManager NuGet package](https://www.nuget.org/packages/System.Configuration.ConfigurationManager). The sample requires the following `using` statement:
+
+```cs
+using System.Configuration;
+```
+
+Your `NameResolver` class gets the queue name from app settings, as shown here:
```cs public class CustomNameResolver : INameResolver
applied-ai-services V3 0 Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/v3-0-sdk-rest-api.md
recommendations: false
# Use Form Recognizer SDKs or REST API | v3.0
- In this how-to guide, you'll learn how to add Form Recognizer to your applications and workflows using a programming language SDK of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
+ In this guide, you'll learn how to add Form Recognizer to your applications and workflows using a programming language SDK of your choice or the REST API. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key text and structure elements from documents. We recommend that you use the free service as you're learning the technology. Remember that the number of free pages is limited to 500 per month.
-In this project, you'll learn how-to use the following Form Recognizer models to analyze and extract data and values from forms and documents:
+Choose from the following Form Recognizer models to analyze and extract data and values from forms and documents:
> [!div class="checklist"] >
In this project, you'll learn how-to use the following Form Recognizer models to
> > * The [prebuilt-tax.us.w2](../concept-w2.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms. >
-> * The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices of various formats and quality including phone-captured images, scanned documents, and digital PDFs.
+> * The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
> > * The [prebuilt-receipt](../concept-receipt.md) model extracts key information from printed and handwritten sales receipts. >
-> * The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US Drivers Licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards.
+> * The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US drivers licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident (green) cards.
> > * The [prebuilt-businessCard](../concept-business-card.md) model extracts key information from business card images.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
To create custom models, you start with configuring your project:
### Labeling as tables > [!NOTE]
-> Tables are currently only supported for custom template models. When training a custom neural model, labeled tables are ignored.
+> * With the release of API versions 2022-06-30-preview and later, custom template models will add support for [cross page tabular fields (tables)](../concept-custom-template.md#tabular-fields).
+> * With the release of API versions 2022-06-30-preview and later, custom neural models will support [tabular fields (tables)](../concept-custom-template.md#tabular-fields), and models trained with API version 2022-08-31 or later will accept tabular field labels.
1. Use the Delete command to delete models that aren't required.
automation Update Agent Issues Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues-linux.md
This check verifies that your machine has access to the endpoints needed by the
Fix this issue by allowing the [prerequisite URLs](../automation-network-configuration.md#update-management-and-change-tracking-and-inventory). After making network changes, you can either rerun the troubleshooter or
-Curl on provided OMS endpoint
+curl the provided ODS endpoint.
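For example, assuming your workspace's data collection endpoint follows the usual `<workspace-id>.ods.opinsights.azure.com` pattern, a quick connectivity check might look like the following; the workspace ID is a placeholder.

```console
# Confirm the machine can reach the workspace's ODS endpoint over TLS
curl -v https://<WORKSPACE_ID>.ods.opinsights.azure.com
```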
### Log Analytics endpoint 2
automation Update Agent Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-agent-issues.md
To fix, follow the steps to [Enable TLS 1.2](../../azure-monitor/agents/agent-wi
## Monitoring agent service health checks
-### Monitoring Agent
-To fix the issue, start **HealthService** service
-
-```
-Start-Service -Name *HealthService* -ErrorAction SilentlyContinue
-```
- ### Hybrid Runbook Worker To fix the issue, do a force re-registration of Hybrid Runbook Worker.
To validate, check event id *15003 (HW start event) OR 15004 (hw stopped event)
Raise a support ticket if the issue still isn't fixed.
-### Monitoring Agent Service
-
-Check the event id 4502 (error event) in **Operations Manager** event logs and check the description.
-
-To troubleshoot, run the [MMA Agent Troubleshooter](../../azure-monitor/agents/agent-windows-troubleshoot.md).
- ### VMs linked workspace See [Network requirements](../../azure-monitor/agents/agent-windows-troubleshoot.md#connectivity-issues).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |--|--|--|--|--|
-| OpenShift 4.7.13 | 1.20.0 | 1.0.0_2021-07-30 | 15.0.2148.140 | postgres 12.3 (Ubuntu 12.3-1)|
+| OpenShift 4.10.32 | v1.23.5 | v1.11.0_2022-09-13 | 16.0.312.4243 | postgres 12.3 (Ubuntu 12.3-1)|
### VMware
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
This parameter specifies a resource in Azure Resource Manager to delete from Azu
> [!NOTE] > If you have deployed one or more Azure VM extensions to your Azure Arc-enabled server and you delete its registration in Azure, the extensions remain installed and may continue performing their functions. Any machine intended to be retired or no longer managed by Azure Arc-enabled servers should first have its [extensions removed](#step-1-remove-vm-extensions) before removing its registration from Azure.
-To disconnect using a service principal, run the following command:
+To disconnect using a service principal, run the command below. Be sure to specify a service principal that has the required roles for disconnecting servers; this will not be the same service principal that was used to onboard the server:
`azcmagent disconnect --service-principal-id <serviceprincipalAppID> --service-principal-secret <serviceprincipalPassword>`
azure-cache-for-redis Cache Event Grid Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-cli.md
sitename=<your-site-name>
az deployment group create \ --resource-group <resource_group_name> \
- --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" \
+ --template-uri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/main/azuredeploy.json" \
--parameters siteName=$sitename hostingPlanName=viewerhost ```
azure-cache-for-redis Cache Event Grid Quickstart Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-event-grid-quickstart-powershell.md
$sitename="<your-site-name>"
New-AzResourceGroupDeployment ` -ResourceGroupName $resourceGroup `
- -TemplateUri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/master/azuredeploy.json" `
+ -TemplateUri "https://raw.githubusercontent.com/Azure-Samples/azure-event-grid-viewer/main/azuredeploy.json" `
-siteName $sitename ` -hostingPlanName viewerhost ```
azure-cache-for-redis Cache Redis Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-samples.md
This sample shows how to:
* Use Redis sets to implement tagging * Work with Redis Cluster
-For more information, see the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) documentation on GitHub. For more usage scenarios, see the [StackExchange.Redis.Tests](https://github.com/StackExchange/StackExchange.Redis/tree/master/tests) unit tests.
+For more information, see the [StackExchange.Redis](https://github.com/StackExchange/StackExchange.Redis) documentation on GitHub. For more usage scenarios, see the [StackExchange.Redis.Tests](https://github.com/StackExchange/StackExchange.Redis/tree/main/tests) unit tests.
[How to use Azure Cache for Redis with Python](cache-python-get-started.md) shows how to get started with Azure Cache for Redis using Python and the [redis-py](https://github.com/andymccurdy/redis-py) client.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
description: Learn how to use a .NET isolated process to run your C# functions i
Previously updated : 07/06/2022 Last updated : 09/29/2022 recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
The following is an example of a middleware implementation which reads the `Http
For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
+## Cancellation tokens
+
+A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
+
+Cancellation tokens are supported in .NET functions when running in an isolated process. The following example shows how to use a cancellation token in a function:
++
+## ReadyToRun
+
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
+
+ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated process), and .NET 7, and it requires [version 3.0 or later](functions-versions.md) of the Azure Functions runtime.
+
+To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app.
+
+```xml
+<PropertyGroup>
+ <TargetFramework>net6.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <RuntimeIdentifier>win-x86</RuntimeIdentifier>
+ <PublishReadyToRun>true</PublishReadyToRun>
+</PropertyGroup>
+```
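With that configuration in the project file, publishing could be as simple as the following sketch; it assumes you build on Windows to match the `win-x86` runtime identifier shown above.

```console
# Publish ReadyToRun binaries; RuntimeIdentifier and PublishReadyToRun come from the project file
dotnet publish --configuration Release
```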
+ ## Execution context .NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
This section describes the current state of the functional and behavioral differ
| Feature/behavior | In-process | Out-of-process | | - | - | - |
-| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 6.0<br/>.NET 7.0 (Preview)<br/>.NET Framework 4.8 (Preview) |
+| .NET versions | .NET Core 3.1<br/>.NET 6.0 | .NET 6.0<br/>.NET 7.0 (Preview)<br/>.NET Framework 4.8 (GA) |
| Core packages | [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) | [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/)<br/>[Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk) | | Binding extension packages | [Microsoft.Azure.WebJobs.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.WebJobs.Extensions) | [Microsoft.Azure.Functions.Worker.Extensions.*](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions) | | Durable Functions | [Supported](durable/durable-functions-overview.md) | [Supported (public preview)](https://github.com/microsoft/durabletask-dotnet#usage-with-azure-functions) |
This section describes the current state of the functional and behavioral differ
| Middleware | Not supported | [Supported](#middleware) | | Logging | [ILogger] passed to the function<br/>[ILogger&lt;T&gt;] via dependency injection | [ILogger]/[ILogger&lt;T&gt;] obtained from [FunctionContext] or via [dependency injection](#dependency-injection)| | Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | [Supported (public preview)](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights) |
-| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | Not supported |
+| Cancellation tokens | [Supported](functions-dotnet-class-library.md#cancellation-tokens) | [Supported](#cancellation-tokens) |
| Cold start times<sup>2</sup> | (Baseline) | Additionally includes process launch |
-| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | _TBD_ |
+| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | [Supported](#readytorun) |
<sup>1</sup> When you need to interact with a service using parameters determined at runtime, using the corresponding service SDKs directly is recommended over using imperative bindings. The SDKs are less verbose, cover more scenarios, and have advantages for error handling and debugging purposes. This recommendation applies to both models.
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Title: App settings reference for Azure Functions description: Reference documentation for the Azure Functions app settings or environment variables. Previously updated : 04/27/2022 Last updated : 10/04/2022 # App settings reference for Azure Functions
The version of the Functions runtime that hosts your function app. A tilde (`~`)
|Key|Sample value| |||
-|FUNCTIONS\_EXTENSION\_VERSION|`~3`|
+|FUNCTIONS\_EXTENSION\_VERSION|`~4`|
+
+The following major runtime version values are supported:
+
+| Value | Runtime target | Comment |
+| | -- | |
+| `~4` | 4.x | Recommended |
+| `~3` | 3.x | Support ends December 13, 2022 |
+| `~2` | 2.x | No longer supported |
+| `~1` | 1.x | Supported |
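As a sketch, you can pin the runtime version with the Azure CLI; the app and resource group names are placeholders.

```azurecli
# Pin the function app to the 4.x runtime
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
```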
## FUNCTIONS\_V2\_COMPATIBILITY\_MODE
This setting enables your function app to run in a version 2.x compatible mode o
>[!IMPORTANT] > This setting is intended only as a short-term workaround while you update your app to run correctly on version 3.x. This setting is supported as long as the [2.x runtime is supported](functions-versions.md). If you encounter issues that prevent your app from running on version 3.x without using this setting, please [report your issue](https://github.com/Azure/azure-functions-host/issues/new?template=Bug_report.md).
-Requires that [FUNCTIONS\_EXTENSION\_VERSION](functions-app-settings.md#functions_extension_version) be set to `~3`.
+Requires that [FUNCTIONS\_EXTENSION\_VERSION](#functions_extension_version) be set to `~3`.
|Key|Sample value| |||
To learn more, see [`pip` documentation for `--index-url`](https://pip.pypa.io/e
## PIP\_EXTRA\_INDEX\_URL
-The value for this setting indicates a extra index URL for custom packages for Python apps, to use in addition to the `--index-url`. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index. Should follow the same rules as --index-url.
+The value for this setting indicates an extra index URL for custom packages for Python apps, to use in addition to the `--index-url`. Use this setting when you need to run a remote build using custom dependencies that are found in an extra package index. Should follow the same rules as --index-url.
|Key|Sample value| |||
Indicates whether all outbound traffic from the app is routed through the virtua
||| |WEBSITE\_VNET\_ROUTE\_ALL|`1`|
+## App Service site settings
+
+Some configurations must be maintained at the App Service level as site settings, such as language versions. These settings are usually set in the portal, by using REST APIs, or by using Azure CLI or Azure PowerShell. The following are site settings that could be required, depending on your runtime language, OS, and versions:
+
+### linuxFxVersion
+
+For function apps running on Linux, `linuxFxVersion` indicates the language and version for the language-specific worker process. This information is used, along with [`FUNCTIONS_EXTENSION_VERSION`](#functions_extension_version), to determine which specific Linux container image is installed to run your function app. This setting can be set to a pre-defined value or a custom image URI.
+
+This value is set for you when you create your Linux function app. You may need to set it for ARM template and Bicep deployments and in certain upgrade scenarios.
+
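For example, a minimal sketch of setting the value from the Azure CLI for a .NET 6 app on Linux; the app and resource group names are placeholders.

```azurecli
# Set the Linux runtime image for a .NET 6 function app
az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
```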
+#### Valid linuxFxVersion values
+
+You can use the following Azure CLI command to see a table of current `linuxFxVersion` values, by supported Functions runtime version:
+
+```azurecli-interactive
+az functionapp list-runtimes --os linux --query "[].{stack:join(' ', [runtime, version]), LinuxFxVersion:linux_fx_version, SupportedFunctionsVersions:to_string(supported_functions_versions[])}" --output table
+```
+
+The previous command requires you to upgrade to version 2.40 of the Azure CLI.
+
+#### Custom images
+
+When you create and maintain your own custom Linux container for your function app, the `linuxFxVersion` value is also in the format `DOCKER|<IMAGE_URI>`, as in the following example:
+
+```
+linuxFxVersion = "DOCKER|contoso.com/azurefunctionsimage:v1.0.0"
+```
+For more information, see [Create a function on Linux using a custom container](functions-create-function-linux-custom-image.md).
++
+### netFrameworkVersion
+
+Sets the specific version of .NET for C# functions. For more information, see [Migrating from 3.x to 4.x](functions-versions.md#migrating-from-3x-to-4x).
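As a sketch, the same site setting can be applied with the Azure CLI; the app and resource group names are placeholders.

```azurecli
# Target .NET 6 for a Windows function app
az functionapp config set --net-framework-version v6.0 --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
```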
+
+### powerShellVersion
+
+Sets the specific version of PowerShell on which your functions run. For more information, see [Changing the PowerShell version](functions-reference-powershell.md#changing-the-powershell-version).
+
+When running locally, you instead use the [`FUNCTIONS_WORKER_RUNTIME_VERSION`](functions-reference-powershell.md#running-local-on-a-specific-version) setting in the local.settings.json file.
+ ## Next steps [Learn how to update app settings](functions-how-to-use-azure-function-app-settings.md#settings)
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
This section describes the configuration settings available for this binding, wh
} ```
-When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `true`, the `sessionHandlerOptions` is honored. When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `false`, the `messageHandlerOptions` is honored.
+When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `true`, the `sessionHandlerOptions` is honored. When you set the `isSessionsEnabled` property or attribute on [the trigger](functions-bindings-service-bus-trigger.md) to `false`, the `messageHandlerOptions` is honored.
+
+The `clientRetryOptions` settings only apply to interactions with the Service Bus service. They don't affect retries of function executions. For more information, see [Retries](functions-bindings-error-pages.md#retries).
+ |Property |Default | Description | ||||
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-function-linux-custom-image.md
Title: Create Azure Functions on Linux using a custom image description: Learn how to create Azure Functions running on a custom Linux image. Previously updated : 09/28/2022 Last updated : 10/04/2022 zone_pivot_groups: programming-languages-set-functions-full
In this tutorial, you learn how to:
You can follow this tutorial on any computer running Windows, macOS, or Linux. + [!INCLUDE [functions-requirements-cli](../../includes/functions-requirements-cli.md)] <!Requirements specific to Docker >
func start
``` ::: zone-end
-After you see the `HttpExample` endpoint appear in the output, navigate to `http://localhost:7071/api/HttpExample?name=Functions`. The browser must display a "hello" message that echoes back `Functions`, the value supplied to the `name` query parameter.
+After you see the `HttpExample` endpoint written to the output, navigate to `http://localhost:7071/api/HttpExample?name=Functions`. The browser must display a "hello" message that echoes back `Functions`, the value supplied to the `name` query parameter.
Press **Ctrl**+**C** to stop the host.
After verifying the function app in the container, press **Ctrl**+**C** to stop
Docker Hub is a container registry that hosts images and provides image and container services. To share your image, which includes deploying to Azure, you must push it to a registry.
-1. If you haven't already signed in to Docker, do so with the [docker login](https://docs.docker.com/engine/reference/commandline/login/) command, replacing `<docker_id>` with your Docker Hub account ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in.
+1. If you haven't already signed in to Docker, do so with the [`docker login`](https://docs.docker.com/engine/reference/commandline/login/) command, replacing `<docker_id>` with your Docker Hub account ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in.
```console docker login ```
-1. After you've signed in, push the image to Docker Hub by using the [docker push](https://docs.docker.com/engine/reference/commandline/push/) command, again replace the `<docker_id>` with your Docker Hub account ID.
+1. After you've signed in, push the image to Docker Hub by using the [`docker push`](https://docs.docker.com/engine/reference/commandline/push/) command, again replace the `<docker_id>` with your Docker Hub account ID.
```console docker push <docker_id>/azurefunctionsimage:v1.0.0
Use the following commands to create these items. Both Azure CLI and PowerShell
az login ```
- The [az login](/cli/azure/reference-index#az-login) command signs you into your Azure account.
+ The [`az login`](/cli/azure/reference-index#az-login) command signs you into your Azure account.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
Use the following commands to create these items. Both Azure CLI and PowerShell
az group create --name AzureFunctionsContainers-rg --location <REGION> ```
- The [az group create](/cli/azure/group#az-group-create) command creates a resource group. In the above command, replace `<REGION>` with a region near you, using an available region code returned from the [az account list-locations](/cli/azure/account#az-account-list-locations) command.
+ The [`az group create`](/cli/azure/group#az-group-create) command creates a resource group. In the above command, replace `<REGION>` with a region near you, using an available region code returned from the [az account list-locations](/cli/azure/account#az-account-list-locations) command.
# [Azure PowerShell](#tab/azure-powershell)
Use the following commands to create these items. Both Azure CLI and PowerShell
New-AzResourceGroup -Name AzureFunctionsContainers-rg -Location <REGION> ```
- The [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) cmdlet.
+ The [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the [`Get-AzLocation`](/powershell/module/az.resources/get-azlocation) cmdlet.
Use the following commands to create these items. Both Azure CLI and PowerShell
az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsContainers-rg --sku Standard_LRS ```
- The [az storage account create](/cli/azure/storage/account#az-storage-account-create) command creates the storage account.
+ The [`az storage account create`](/cli/azure/storage/account#az-storage-account-create) command creates the storage account.
# [Azure PowerShell](#tab/azure-powershell)
Use the following commands to create these items. Both Azure CLI and PowerShell
New-AzStorageAccount -ResourceGroupName AzureFunctionsContainers-rg -Name <STORAGE_NAME> -SkuName Standard_LRS -Location <REGION> ```
- The [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet creates the storage account.
+ The [`New-AzStorageAccount`](/powershell/module/az.storage/new-azstorageaccount) cmdlet creates the storage account.
Use the following commands to create these items. Both Azure CLI and PowerShell
We use the Premium plan here, which can scale as needed. For more information about hosting, see [Azure Functions hosting plans comparison](functions-scale.md). For more information on how to calculate costs, see the [Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
- The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
## Create and configure a function app on Azure with the image
A function app on Azure manages the execution of your functions in your hosting
az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0 ```
- In the [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az-functionapp-config-container-show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az-functionapp-config-container-set) command to deploy from a different image.
+ In the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az-functionapp-config-container-show) command to view information about the image used for deployment. You can also use the [`az functionapp config container set`](/cli/azure/functionapp/config/container#az-functionapp-config-container-set) command to deploy from a different image.
> [!NOTE] > If you're using a custom container registry, then the *deployment-container-image-name* parameter will refer to the registry URL.
A function app on Azure manages the execution of your functions in your hosting
az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv ```
- The connection string for the storage account is returned by using the [az storage account show-connection-string](/cli/azure/storage/account) command.
+ The connection string for the storage account is returned by using the [`az storage account show-connection-string`](/cli/azure/storage/account) command.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
A function app on Azure manages the execution of your functions in your hosting
$string = "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=" + $storage_name + ";AccountKey=" + $key Write-Output($string) ```
- The key returned by the [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) cmdlet is used to construct the connection string for the storage account.
+ The key returned by the [`Get-AzStorageAccountKey`](/powershell/module/az.storage/get-azstorageaccountkey) cmdlet is used to construct the connection string for the storage account.
A function app on Azure manages the execution of your functions in your hosting
```azurecli az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<CONNECTION_STRING> ```
- The [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az-functionapp-config-ppsettings-set) command creates the setting.
+ The [`az functionapp config appsettings set`](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set) command creates the setting.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell Update-AzFunctionAppSetting -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg -AppSetting @{"AzureWebJobsStorage"="<CONNECTION_STRING>"} ```
- The [Update-AzFunctionAppSetting](/powershell/module/az.functions/update-azfunctionappsetting) cmdlet creates the setting.
+ The [`Update-AzFunctionAppSetting`](/powershell/module/az.functions/update-azfunctionappsetting) cmdlet creates the setting.
You can enable Azure Functions to automatically update your deployment of an ima
az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <APP_NAME> --resource-group AzureFunctionsContainers-rg ```
- The [az functionapp deployment container config](/cli/azure/functionapp/deployment/container#az-functionapp-deployment-container-config) command enables continuous deployment and returns the deployment webhook URL. You can retrieve this URL at any later time by using the [az functionapp deployment container show-cd-url](/cli/azure/functionapp/deployment/container#az-functionapp-deployment-container-show-cd-url) command.
+ The [`az functionapp deployment container config`](/cli/azure/functionapp/deployment/container#az-functionapp-deployment-container-config) command enables continuous deployment and returns the deployment webhook URL. You can retrieve this URL at any later time by using the [`az functionapp deployment container show-cd-url`](/cli/azure/functionapp/deployment/container#az-functionapp-deployment-container-show-cd-url) command.
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
You can enable Azure Functions to automatically update your deployment of an ima
Get-AzWebAppContainerContinuousDeploymentUrl -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg ```
- The `DOCKER_ENABLE_CI` application setting controls whether continuous deployment is enabled from the container repository. The [Get-AzWebAppContainerContinuousDeploymentUrl](/powershell/module/az.websites/get-azwebappcontainercontinuousdeploymenturl) cmdlet returns the URL of the deployment webhook.
+ The `DOCKER_ENABLE_CI` application setting controls whether continuous deployment is enabled from the container repository. The [`Get-AzWebAppContainerContinuousDeploymentUrl`](/powershell/module/az.websites/get-azwebappcontainercontinuousdeploymenturl) cmdlet returns the URL of the deployment webhook.
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
If you install the Core Tools using the Windows installer (MSI) package or by us
## ReadyToRun
-You can compile your function app as [ReadyToRun binaries](/dotnet/core/whats-new/dotnet-core-3-0#readytorun-images). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
-ReadyToRun is available in .NET 3.0 and requires [version 3.0 of the Azure Functions runtime](functions-versions.md).
+ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated process), and .NET 7, and it requires [version 3.0 or 4.0 of the Azure Functions runtime](functions-versions.md).
To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app. ```xml <PropertyGroup>
- <TargetFramework>netcoreapp3.1</TargetFramework>
- <AzureFunctionsVersion>v3</AzureFunctionsVersion>
+ <TargetFramework>net6.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
<PublishReadyToRun>true</PublishReadyToRun> <RuntimeIdentifier>win-x86</RuntimeIdentifier> </PropertyGroup> ``` > [!IMPORTANT]
-> ReadyToRun currently doesn't support cross-compilation. You must build your app on the same platform as the deployment target. Also, pay attention to the "bitness" that is configured in your function app. For example, if your function app in Azure is Windows 64-bit, you must compile your app on Windows with `win-x64` as the [runtime identifier](/dotnet/core/rid-catalog).
+> Starting in .NET 6, support for Composite ReadyToRun compilation has been added. Check out [ReadyToRun Cross platform and architecture restrictions](/dotnet/core/deploying/ready-to-run).
You can also build your app with ReadyToRun from the command line. For more information, see the `-p:PublishReadyToRun=true` option in [`dotnet publish`](/dotnet/core/tools/dotnet-publish).
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
The following table shows the PowerShell versions available to each major versio
| Functions version | PowerShell version | .NET version | |-|--||
-| 4.x (recommended) | PowerShell 7.2<br/>PowerShell 7 (recommended) | .NET 6 |
-| 3.x | PowerShell 7<br/>PowerShell Core 6 | .NET Core 3.1<br/>.NET Core 2.1 |
-| 2.x | PowerShell Core 6 | .NET Core 2.2 |
+| 4.x (recommended) | PowerShell 7.2 (recommended) <br/>PowerShell 7 | .NET 6 |
+| 3.x | PowerShell 7 | .NET Core 3.1 |
You can see the current version by printing `$PSVersionTable` from any function.
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 09/23/2022 Last updated : 10/04/2022 zone_pivot_groups: programming-languages-set-functions
Azure Functions provides a pre-upgrade validator to help you identify potential
### Migrate without slots
-The simplest way to upgrade to v4.x is to set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` on your function app in Azure. When your function app runs on Windows, you also need to update the `netFrameworkVersion` site setting in Azure. You must follow a [different procedure](#migrate-using-slots) on a site with slots.
+The simplest way to upgrade to v4.x is to set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` on your function app in Azure. You must follow a [different procedure](#migrate-using-slots) on a site with slots.
# [Azure CLI](#tab/azure-cli)
Update-AzFunctionAppSetting -AppSetting @{FUNCTIONS_EXTENSION_VERSION = "~4"} -N
-When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
+# [Windows](#tab/windows/azure-cli)
-# [Azure CLI](#tab/azure-cli)
+When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
```azurecli az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> ```
-# [Azure PowerShell](#tab/azure-powershell)
+.NET 6 is required for function apps in any language running on Windows.
+
+# [Windows](#tab/windows/azure-powershell)
+
+When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
```azurepowershell Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> ```+
+.NET 6 is required for function apps in any language running on Windows.
+
+# [Linux](#tab/linux/azure-cli)
+
+When running .NET apps on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
+
+```azurecli
+az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
+```
+
+# [Linux](#tab/linux/azure-powershell)
+
+When running .NET apps on Linux, you also need to update the `linuxFxVersion` site setting. Azure PowerShell can't currently be used to set `linuxFxVersion`; use the Azure CLI instead.
+
-In these examples, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
+In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
### Migrate using slots
The [`Update-AzFunctionAppSetting`](/powershell/module/az.functions/update-azfun
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> ```
-1. (Windows only) For function apps running on Windows, use the following command so that the runtime can run on .NET 6:
+1. Version 4.x of the Functions runtime requires .NET 6 on Windows. On Linux, .NET apps must also upgrade to .NET 6. Use the following command so that the runtime can run on .NET 6:
+ # [Windows](#tab/windows)
+
+ When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
+ ```azurecli
- az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
```
- Version 4.x of the Functions runtime requires .NET 6 when running on Windows.
+ .NET 6 is required for function apps in any language running on Windows.
+
+ # [Linux](#tab/linux/azure-cli)
+
+ When running .NET functions on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
+
+ ```azurecli
+ az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
+ ```
+
+
+
+ In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
To minimize the downtime in your production app, you can swap the `WEBSITE_OVERR
az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> ```
-1. (Windows only) For function apps running on Windows, use the following command so that the runtime can run on .NET 6:
+1. Version 4.x of the Functions runtime requires .NET 6 on Windows. On Linux, .NET apps must also upgrade to .NET 6. Use the following command so that the runtime can run on .NET 6:
+ # [Windows](#tab/windows)
+
+ When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
+ ```azurecli
- az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
```
- Version 4.x of the Functions runtime requires .NET 6 when running on Windows.
+ .NET 6 is required for function apps in any language running on Windows.
+
+ # [Linux](#tab/linux/azure-cli)
+
+ When running .NET functions on Linux, you also need to update the `linuxFxVersion` site setting for .NET 6.0.
+
+ ```azurecli
+ az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --linux-fx-version "DOTNET|6.0"
+ ```
+
+
+
+ In this example, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
Title: How to target Azure Functions runtime versions description: Azure Functions supports multiple versions of the runtime. Learn how to specify the runtime version of a function app hosted in Azure.- Previously updated : 07/22/2020 Last updated : 10/04/2022
You can change the runtime version used by your function app. Because of the pot
You can also view and set the `FUNCTIONS_EXTENSION_VERSION` from the Azure CLI.
-Using the Azure CLI, view the current runtime version with the [az functionapp config appsettings list](/cli/azure/functionapp/config/appsettings) command.
+Using the Azure CLI, view the current runtime version with the [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings) command.
```azurecli-interactive az functionapp config appsettings list --name <function_app> \
az functionapp config appsettings set --name <FUNCTION_APP> \
Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<VERSION>` with either a specific version, or `~4`, `~3`, `~2`, or `~1`.
-Choose **Try it** in the previous code example to run the command in [Azure Cloud Shell](../cloud-shell/overview.md). You can also run the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command. When running locally, you must first run [az login](/cli/azure/reference-index#az-login) to sign in.
+Choose **Try it** in the previous code example to run the command in [Azure Cloud Shell](../cloud-shell/overview.md). You can also run the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command. When running locally, you must first run [`az login`](/cli/azure/reference-index#az-login) to sign in.
# [PowerShell](#tab/powershell)
As before, replace `<FUNCTION_APP>` with the name of your function app and `<RES
The function app restarts after the change is made to the application setting.
-## Manual version updates on Linux
+## <a name="manual-version-updates-on-linux"></a>Pin to a specific version on Linux
-To pin a Linux function app to a specific host version, you specify the image URL in the 'LinuxFxVersion' field in site config. For example: if we want to pin a node 10 function app to say host version 3.0.13142 -
+To pin a Linux function app to a specific host version, you set a version-specific base image URL in the [`linuxFxVersion` site setting][`linuxFxVersion`] in the format `DOCKER|<PINNED_VERSION_IMAGE_URI>`.
-For **linux app service/elastic premium apps** -
-Set `LinuxFxVersion` to `DOCKER|mcr.microsoft.com/azure-functions/node:3.0.13142-node10-appservice`.
+> [!IMPORTANT]
+> Pinned function apps on Linux don't receive regular security and host functionality updates. Unless recommended by a support professional, use the [`FUNCTIONS_EXTENSION_VERSION`](functions-app-settings.md#functions_extension_version) setting and a standard [`linuxFxVersion`] value for your language and version, such as `Python|3.9`. For valid values, see the [`linuxFxVersion` reference article][`linuxFxVersion`].
+>
+> For apps running in a Consumption plan, setting [`linuxFxVersion`] to a specific image may also increase cold start times. This is because pinning to a specific image prevents Functions from using some cold start optimizations.
-For **linux consumption apps** -
-Set `LinuxFxVersion` to `DOCKER|mcr.microsoft.com/azure-functions/mesh:3.0.13142-node10`.
+The following table provides an example of [`linuxFxVersion`] values required to pin a Node.js 18 function app to a specific runtime version of 4.11.2:
-# [Portal](#tab/portal)
+| [Hosting plan](functions-scale.md) | [`linuxFxVersion` value][`linuxFxVersion`] |
+| | |
+| Consumption | `DOCKER\|mcr.microsoft.com/azure-functions/mesh:4.11.2-node18` |
+| Premium/Dedicated | `DOCKER\|mcr.microsoft.com/azure-functions/node:4.11.2-node18-appservice` |
-Viewing and modifying site config settings for function apps isn't supported in the Azure portal. Use the Azure CLI instead.
+When needed, a support professional can provide you with a valid base image URI for your application.
-# [Azure CLI](#tab/azurecli)
-
-You can view and set the `LinuxFxVersion` by using the Azure CLI. To know the list of available `LinuxFxVersion`, use [az functionapp list-runtimes](/cli/azure/functionapp#az-functionapp-list-runtimes) command.
+Use the following Azure CLI commands to view and set the [`linuxFxVersion`]. You can't currently set [`linuxFxVersion`] in the portal or by using Azure PowerShell.
To view the current runtime version, use the [az functionapp config show](/cli/azure/functionapp/config) command.
az functionapp config show --name <function_app> \
--resource-group <my_resource_group> --query 'linuxFxVersion' -o tsv ```
-In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app. The current value of `linuxFxVersion` is returned.
+In this code, replace `<function_app>` with the name of your function app. Also replace `<my_resource_group>` with the name of the resource group for your function app. The current value of [`linuxFxVersion`] is returned.
-To update the `linuxFxVersion` setting in the function app, use the [az functionapp config set](/cli/azure/functionapp/config) command.
+To update the [`linuxFxVersion`] setting in the function app, use the [az functionapp config set](/cli/azure/functionapp/config) command.
```azurecli-interactive az functionapp config set --name <FUNCTION_APP> \
az functionapp config set --name <FUNCTION_APP> \
--linux-fx-version <LINUX_FX_VERSION> ```
-Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Also, replace `<LINUX_FX_VERSION>` with the value of a specific image as described above.
-
-You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [az login](/cli/azure/reference-index#az-login) to sign in.
+Replace `<FUNCTION_APP>` with the name of your function app. Also replace `<RESOURCE_GROUP>` with the name of the resource group for your function app. Finally, replace `<LINUX_FX_VERSION>` with the value of a specific image provided to you by a support professional.
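
As a concrete sketch only, pinning a Consumption plan app to the example image from the table above would look like the following; the app and resource group names are placeholders, and you should use the exact image URI provided by a support professional:

```azurecli-interactive
az functionapp config set --name <FUNCTION_APP> \
--resource-group <RESOURCE_GROUP> \
--linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/mesh:4.11.2-node18"
```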
-# [PowerShell](#tab/powershell)
-
-Azure PowerShell can't be used to set the `linuxFxVersion` at this time. Use the Azure CLI instead.
--
+You can run this command from the [Azure Cloud Shell](../cloud-shell/overview.md) by choosing **Try it** in the preceding code sample. You can also use the [Azure CLI locally](/cli/azure/install-azure-cli) to execute this command after executing [`az login`](/cli/azure/reference-index#az-login) to sign in.
The function app restarts after the change is made to the site config.
-> [!NOTE]
-> For apps running in a Consumption plan, setting `LinuxFxVersion` to a specific image may increase cold start times. This is because pinning to a specific image prevents Functions from using some cold start optimizations.
- ## Next steps > [!div class="nextstepaction"]
The function app restarts after the change is made to the site config.
> [!div class="nextstepaction"] > [See Release notes for runtime versions](https://github.com/Azure/azure-webjobs-sdk-script/releases)+
+[`linuxFxVersion`]: functions-app-settings.md#linuxfxversion
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
The table below provides a more detailed explanation of what these values mean,
|`AppContainsDiagnosticSourceAssembly**:true`|This value indicates that extension detected references to `System.Diagnostics.DiagnosticSource` in the application, and will back-off.| For ASP.NET remove the reference. |`IKeyExists:false`|This value indicates that the instrumentation key isn't present in the AppSetting, `APPINSIGHTS_INSTRUMENTATIONKEY`. Possible causes: The values may have been accidentally removed, forgot to set the values in automation script, etc. | Make sure the setting is present in the App Service application settings.
+### System.IO.FileNotFoundException after 2.8.44 upgrade
+
+Version 2.8.44 of the auto-instrumentation upgrades the Application Insights SDK to version 2.20.0. The Application Insights SDK has an indirect reference to `System.Runtime.CompilerServices.Unsafe.dll` through `System.Diagnostics.DiagnosticSource.dll`. If the application has a [binding redirect](https://learn.microsoft.com/dotnet/framework/configure-apps/file-schema/runtime/bindingredirect-element) for `System.Runtime.CompilerServices.Unsafe.dll` and this library isn't present in the application folder, it may throw `System.IO.FileNotFoundException`.
+
+To resolve this issue, remove the binding redirect entry for `System.Runtime.CompilerServices.Unsafe.dll` from the web.config file. If the application needs to use `System.Runtime.CompilerServices.Unsafe.dll`, set the binding redirect as shown below.
+
+```xml
+<dependentAssembly>
+ <assemblyIdentity name="System.Runtime.CompilerServices.Unsafe" publicKeyToken="b03f5f7f11d50a3a" culture="neutral" />
+ <bindingRedirect oldVersion="0.0.0.0-4.0.4.1" newVersion="4.0.4.1" />
+</dependentAssembly>
+```
+
+As a temporary workaround, you can set the app setting `ApplicationInsightsAgent_EXTENSION_VERSION` to a value of `2.8.37`, which triggers App Service to use the old Application Insights extension. Use this mitigation only as an interim measure.
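
If you prefer to apply the workaround from the command line, a minimal Azure CLI sketch looks like the following; the app and resource group names are placeholders:

```azurecli
az webapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings ApplicationInsightsAgent_EXTENSION_VERSION=2.8.37
```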
+ ## Release notes For the latest updates and bug fixes [consult the release notes](web-app-extension-release-notes.md).
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
To use the HTTP Data Collector API, you create a POST request that includes the
| Log-Type |Specify the record type of the data that's being submitted. It can contain only letters, numbers, and the underscore (_) character, and it can't exceed 100 characters. | | x-ms-date |The date that the request was processed, in RFC 7234 format. | | x-ms-AzureResourceId | The resource ID of the Azure resource that the data should be associated with. It populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](manage-access.md#access-mode) queries. If this field isn't specified, the data won't be included in resource-context queries. |
-| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. Note: the Time Generated value cannot be older than 2 days before received time or the row will be dropped.|
+| time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. If the **TimeGenerated** value is more than two days before the received time, the ingestion time is used instead.|
| | | ## Authorization
namespace OIAPIExample
client.DefaultRequestHeaders.Add("x-ms-date", date); client.DefaultRequestHeaders.Add("time-generated-field", TimeStampField);
+ // If charset=utf-8 is part of the content-type header, the API call may return forbidden.
System.Net.Http.HttpContent httpContent = new StringContent(json, Encoding.UTF8); httpContent.Headers.ContentType = new MediaTypeHeaderValue("application/json"); Task<System.Net.Http.HttpResponseMessage> response = client.PostAsync(new Uri(url), httpContent);
azure-percept Azureeyemodule Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/azureeyemodule-overview.md
Title: Azure Percept Vision AI module description: An overview of the azureeyemodule, which is the module responsible for running the AI vision workload on the Azure Percept DK.-+ Previously updated : 08/09/2021 Last updated : 10/04/2022 # Azure Percept Vision AI module + Azureeyemodule is the name of the edge module responsible for running the AI vision workload on the Azure Percept DK. It's part of the Azure IoT suite of edge modules and is deployed to the Azure Percept DK during the [setup experience](./quickstart-percept-dk-set-up.md). This article provides an overview of the module and its architecture. ## Architecture
azure-percept How To Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-deploy-model.md
Title: Deploy a vision AI model to Azure Percept DK description: Learn how to deploy a vision AI model to your Azure Percept DK from Azure Percept Studio-+ Previously updated : 02/12/2021 Last updated : 10/04/2022 # Deploy a vision AI model to Azure Percept DK + Follow this guide to deploy a vision AI model to your Azure Percept DK from within Azure Percept Studio. ## Prerequisites
azure-percept How To Determine Your Update Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-determine-your-update-strategy.md
Title: Determine your update strategy for Azure Percept DK description: Pros and cons of Azure Percept DK OTA or USB cable updates. Recommendation for choosing the best update approach for different users. -+ Previously updated : 08/23/2021 Last updated : 10/04/2022 # Determine your update strategy for Azure Percept DK ++ >[!CAUTION] >**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
azure-percept How To Get Hardware Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-get-hardware-support.md
Title: Get Azure Percept hardware support from ASUS description: This guide shows you how to contact ASUS for technical support for the Azure Percept DK hardware. -+ Previously updated : 07/13/2021 Last updated : 10/04/2022 # Get Azure Percept hardware support from ASUS + As the OEM for the Azure Percept DK, ASUS provides technical support to all customer who purchased a device and business support for customers interested in purchasing devices. This article shows you how to contact ASUS to get support.
azure-percept How To Manage Voice Assistant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-manage-voice-assistant.md
Title: Manage your Azure Percept voice assistant application description: Configure a voice assistant application within Azure Percept Studio-+ Previously updated : 02/15/2021 Last updated : 10/04/2022 # Manage your Azure Percept voice assistant application + This article describes how to configure the keyword and commands of your voice assistant application within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). For guidance on configuring your keyword within IoT Hub instead of the portal, see this [how-to article](./how-to-configure-voice-assistant.md). If you have not yet created a voice assistant application, see [Build a no-code voice assistant with Azure Percept Studio and Azure Percept Audio](./tutorial-no-code-speech.md).
azure-percept How To Set Up Advanced Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-advanced-network-settings.md
Title: Set up advanced network settings on the Azure Percept DK description: This article walks user through the Advanced Network Settings during the Azure Percept DK setup experience-+ Previously updated : 7/19/2021 Last updated : 10/04/2022 # Set up advanced network settings on the Azure Percept DK + The Azure Percept DK allows you to control various networking components on the dev kit. This is done via the Advanced Networking Settings in the setup experience. To access these settings, you must [start the setup experience](./quickstart-percept-dk-set-up.md) and select **Access advanced network settings** on the **Network connection** page. :::image type="content" source="media/how-to-set-up-advanced-network-settings/advanced-ns-entry.png" alt-text="Launch the advanced network settings from the Network connections page":::
azure-percept How To Set Up Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-set-up-over-the-air-updates.md
Title: Set up Azure IoT Hub to deploy over-the-air updates description: Learn how to configure Azure IoT Hub to deploy updates over-the-air to Azure Percept DK-+ Previously updated : 03/30/2021 Last updated : 10/04/2022 # Set up Azure IoT Hub to deploy over-the-air updates + >[!CAUTION] >**The OTA update on Azure Percept DK is no longer supported. For information on how to proceed, please visit [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md).**
azure-percept How To Update Via Usb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/how-to-update-via-usb.md
Title: Update Azure Percept DK over a USB-C connection description: Learn how to update the Azure Percept DK over a USB-C cable connection-+ Previously updated : 03/18/2021 Last updated : 10/04/2022 # Update Azure Percept DK over a USB-C connection + This guide will show you how to successfully update your dev kit's operating system and firmware over a USB connection. Here's an overview of what you will be doing during this procedure. 1. Download the update package to a host computer
azure-percept Overview 8020 Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-8020-integration.md
Title: Azure Percept DK 80/20 integration description: Learn more about how Azure Percept DK integrates with the 80/20 railing system.-+ Previously updated : 03/24/2021 Last updated : 10/04/2022 # Azure Percept DK 80/20 integration + The Azure Percept DK and Audio Accessory were designed to integrate with the [80/20 T-slot aluminum building system](https://8020.net/). ## 80/20 features
azure-percept Overview Advanced Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-advanced-code.md
Title: Advanced development with Azure Percept description: Learn more about advanced development tools on Azure Percept-+ Previously updated : 03/23/2021 Last updated : 10/04/2022 # Advanced development with Azure Percept + With Azure Percept, software developers and data scientists can use advanced code workflows for AI lifecycle management. Through a growing open source library, they can use samples to get started with their AI development journey and build production-ready solutions. ## Get started with advanced development
azure-percept Overview Ai Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-ai-models.md
Title: Azure Percept sample AI models description: Learn more about the AI models available for prototyping and deployment-+ Previously updated : 03/23/2021 Last updated : 10/04/2022 # Azure Percept sample AI models + Azure Percept enables you to develop and deploy AI models directly to your [Azure Percept DK](./overview-azure-percept-dk.md) from [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819). Model deployment utilizes [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) and [Azure IoT Edge](https://azure.microsoft.com/services/iot-edge/#iotedge-overview). ## Sample AI models
azure-percept Overview Azure Percept Audio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-audio.md
Title: Azure Percept Audio device overview description: Learn more about Azure Percept Audio-+ Previously updated : 03/23/2021 Last updated : 10/04/2022 # Azure Percept Audio device overview + Azure Percept Audio is an accessory device that adds speech AI capabilities to [Azure Percept DK](./overview-azure-percept-dk.md). It contains a preconfigured audio processor and a four-microphone linear array, enabling you to use voice commands, keyword spotting, and far field speech with the help of Azure Cognitive Services. It is integrated out-of-the-box with Azure Percept DK, [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819), and other Azure edge management services. Azure Percept Audio is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270). > [!div class="nextstepaction"]
azure-percept Overview Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-dk.md
Title: Azure Percept DK and Vision device overview description: Learn more about the Azure Percept DK and Azure Percept Vision-+ Previously updated : 03/23/2021 Last updated : 10/04/2022 # Azure Percept DK and Vision device overview + Azure Percept DK is an edge AI development kit designed for developing vision and audio AI solutions with [Azure Percept Studio](./overview-azure-percept-studio.md). Azure Percept DK is available for purchase at the [Microsoft online store](https://go.microsoft.com/fwlink/p/?LinkId=2155270). > [!div class="nextstepaction"]
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-studio.md
Title: Azure Percept Studio overview v1 description: Learn more about Azure Percept Studio-+ Previously updated : 03/23/2021 Last updated : 10/04/2022 # Azure Percept Studio overview v1 + [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) is the single launch point for creating edge AI models and solutions. Azure Percept Studio allows you to discover and complete guided workflows that make it easy to integrate edge AI-capable hardware and powerful Azure AI and IoT cloud services. In the Studio, you can see your edge AI-capable devices as end points for collecting initial and ongoing training data as well as deployment targets for model iterations. Having access to devices and training data allows for rapid prototyping and iterative edge AI model development for both [vision](./tutorial-nocode-vision.md) and [speech](./tutorial-no-code-speech.md) scenarios.
azure-percept Overview Azure Percept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept.md
Title: Azure Percept overview description: Learn more about the Azure Percept platform-+ Previously updated : 03/23/2021 Last updated : 10/04/2022 # Azure Percept overview + Azure Percept is a family of hardware, software, and services designed to accelerate business transformation using IoT and AI at the edge. Azure Percept covers the full stack from silicon to services to solve the integration challenges of edge AI at scale. The integration challenges one faces when attempting to deploy edge AI solutions at scale can be summed up into three major points of friction:
azure-percept Overview Percept Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-percept-security.md
Title: Azure Percept security description: Learn more about Azure Percept security-+ Previously updated : 03/24/2021 Last updated : 10/04/2022 # Azure Percept security + Azure Percept devices are designed with a hardware root of trust. This built-in security helps protect inference data and privacy-sensitive sensors like cameras and microphones and enables device authentication and authorization for Azure Percept Studio services. > [!NOTE]
azure-percept Overview Update Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-update-experience.md
Title: Azure Percept DK update experience description: Learn more about how to keep the Azure Percept DK up-to-date-+ Previously updated : 03/24/2021 Last updated : 10/04/2022 # Azure Percept DK update experience + With Azure Percept DK, you may update your dev kit OS and firmware over-the-air (OTA) or via USB. OTA updating is an easy way keep devices up-to-date through the [Device Update for IoT Hub](../iot-hub-device-update/index.yml) service. USB updates are available for users who are unable to use OTA updates or when a factory reset of the device is needed. Check out the following how-to guides to get started with Azure Percept DK device updates: - [Set up Azure IoT Hub to deploy over-the-air (OTA) updates to your Azure Percept DK](./how-to-set-up-over-the-air-updates.md)
azure-percept Retirement Of Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md
+
+ Title: Retirement of Azure Percept DK
+description: Information about the retirement of the Azure Percept DK.
++++ Last updated : 10/04/2022++
+# Retirement of Azure Percept DK
+
+The [Azure Percept](https://azure.microsoft.com/products/azure-percept/) public preview will be evolving to support new edge device platforms and developer experiences. As part of this evolution, the Azure Percept DK, the Audio Accessory, and the associated supporting Azure services for the Percept DK will be retired on March 30, 2023.
+
+Effective March 30, 2023, the Azure Percept DK and Audio Accessory will no longer be supported by any Azure services, including Azure Percept Studio, OS updates, container updates, web stream viewing, and Custom Vision integration. Microsoft will no longer provide customer success support or any associated supporting services.
azure-percept Troubleshoot Audio Accessory Speech Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/troubleshoot-audio-accessory-speech-module.md
Title: Troubleshoot Azure Percept Audio and speech module description: Get troubleshooting tips for Azure Percept Audio and azureearspeechclientmodule-+ Previously updated : 08/03/2021 Last updated : 10/04/2022 # Troubleshoot Azure Percept Audio and speech module + Use the guidelines below to troubleshoot voice assistant application issues. ## Checking runtime status of the speech module
azure-percept Troubleshoot Dev Kit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/troubleshoot-dev-kit.md
Title: Troubleshoot the Azure Percept DK device description: Get troubleshooting tips for some of the more common issues with Azure Percept DK and IoT Edge-+ Previously updated : 08/10/2021 Last updated : 10/04/2022 # Troubleshoot the Azure Percept DK device ++ The purpose of this troubleshooting article is to help Azure Percept DK users to quickly resolve common issues with their dev kits. It also provides guidance on collecting logs for when extra support is needed. ## Log collection
azure-percept Tutorial No Code Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/tutorial-no-code-speech.md
Title: Create a no-code voice assistant in Azure Percept Studio description: Learn how to create and deploy a no-code speech solution to your Azure Percept DK-+ Previously updated : 02/17/2021 Last updated : 10/04/2022 # Create a no-code voice assistant in Azure Percept Studio ++ In this tutorial, you will create a voice assistant from a template to use with your Azure Percept DK and Azure Percept Audio. The voice assistant demo runs within [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) and contains a selection of voice-controlled virtual objects. To control an object, say your keyword, which is a word or short phrase that wakes your device, followed by a command. Each template responds to a set of specific commands. This guide will walk you through the process of setting up your devices, creating a voice assistant and the necessary [Speech Services](../cognitive-services/speech-service/overview.md) resources, testing your voice assistant, configuring your keyword, and creating custom keywords.
azure-percept Tutorial Nocode Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/tutorial-nocode-vision.md
Title: Create a no-code vision solution in Azure Percept Studio description: Learn how to create a no-code vision solution in Azure Percept Studio and deploy it to your Azure Percept DK-+ Previously updated : 02/10/2021 Last updated : 10/04/2022 # Create a no-code vision solution in Azure Percept Studio ++ Azure Percept Studio enables you to build and deploy custom computer vision solutions, no coding required. In this article, you will: - Create a vision project in [Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819)
azure-percept Vision Solution Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/vision-solution-troubleshooting.md
Title: Troubleshoot Azure Percept Vision and vision modules description: Get troubleshooting tips for some of the more common issues found in the vision AI prototyping experiences.-+ Previously updated : 03/29/2021 Last updated : 10/04/2022 # Troubleshoot Azure Percept Vision and vision modules ++ This article provides information on troubleshooting no-code vision solutions in Azure Percept Studio. ## Delete a vision project
azure-resource-manager Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/key-vault-access.md
Title: Use Azure Key Vault when deploying Managed Applications
description: Shows how to access secrets in Azure Key Vault when deploying Managed Applications. Previously updated : 04/29/2022 Last updated : 10/04/2022 # Access Key Vault secret when deploying Azure Managed Applications
This article describes how to configure the Key Vault to work with Managed Appli
:::image type="content" source="./media/key-vault-access/open-key-vault.png" alt-text="Screenshot of the Azure home page to open a key vault using search or by selecting key vault.":::
-1. Select **Access policies**.
+1. Select **Access policies**.
:::image type="content" source="./media/key-vault-access/select-access-policies.png" alt-text="Screenshot of the key vault setting to select access policies."::: 1. Select **Azure Resource Manager for template deployment**. Then, select **Save**.
- :::image type="content" source="./media/key-vault-access/enable-template.png" alt-text="Screenshot of the key vault's access policies to enable Azure Resource Manager for template deployment.":::
+ :::image type="content" source="./media/key-vault-access/enable-template.png" alt-text="Screenshot of the key vault's access policies that enable Azure Resource Manager for template deployment.":::
## Add service as contributor
-Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope.
+Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can see if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
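
For example, a minimal Azure CLI sketch for registering the resource provider in your subscription:

```azurecli
az provider register --namespace Microsoft.Solutions
```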
## Reference Key Vault secret
To pass a secret from a Key Vault to a template in your Managed Application, you
"resources": [ { "type": "Microsoft.Sql/servers",
- "apiVersion": "2021-08-01-preview",
+ "apiVersion": "2022-02-01-preview",
"name": "[variables('sqlServerName')]", "location": "[parameters('location')]", "properties": {
To pass a secret from a Key Vault to a template in your Managed Application, you
You've configured your Key Vault to be accessible during deployment of a Managed Application. - For information about passing a value from a Key Vault as a template parameter, see [Use Azure Key Vault to pass secure parameter value during deployment](../templates/key-vault-parameter.md).
+- To learn more about key vault security, see [Azure Key Vault security](../../key-vault/general/security-features.md) and [Authentication in Azure Key Vault](../../key-vault/general/authentication.md).
- For managed application examples, see [Sample projects for Azure managed applications](sample-projects.md). - To learn how to create a UI definition file for a managed application, see [Get started with CreateUiDefinition](create-uidefinition-overview.md).
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
If you're running CLI commands with Git Bash for Windows, you might get an `Inva
-The **Appliance Resource Provider** is an Azure Enterprise application (service principal). Go to **Azure Active Directory** > **Enterprise applications** and change the search filter to **All Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
+The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can see if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
### Deploy the managed application definition with an ARM template
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
This article gives a brief overview of Azure Video Indexer terminology and concepts.
+## Artifact files
+
+If you plan to download artifact files, beware of the following:
+
+ ## Confidence scores The confidence score indicates the confidence in an insight. It is a number between 0.0 and 1.0. The higher the score the greater the confidence in the answer. For example:
azure-video-indexer Video Indexer View Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-view-edit.md
# View and edit Azure Video Indexer insights
-This topic shows you how to view and edit the Azure Video Indexer insights of a video.
+This article shows you how to view and edit the Azure Video Indexer insights of a video.
1. Browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign in. 2. Find a video from which you want to create your Azure Video Indexer insights. For more information, see [Find exact moments within videos](video-indexer-search.md).
This topic shows you how to view and edit the Azure Video Indexer insights of a
The page shows the video's insights. ![Insights](./media/video-indexer-view-edit/video-indexer-summarized-insights.png)
-4. View the insights of the video.
+4. Select which insights you want to view, for example, faces, keywords, or sentiments. You can see the faces of people, the time ranges in which each face appears, and the percentage of time it's shown.
- Summarized insights show an aggregated view of the data: faces, keywords, sentiments. For example, you can see the faces of people and the time ranges each face appears in and the % of the time it is shown.
+ The **Timeline** tab shows transcripts with timelines and other information that you can choose from the **View** drop-down.
- [!INCLUDE [insights](./includes/insights.md)]
-
- Select the **Timeline** tab to see transcripts with timelines and other information that you can choose from the **View** drop-down.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="./media/video-indexer-view-edit/timeline.png" alt-text="Screenshot that shows how to select the Insights." lightbox="./media/video-indexer-view-edit/timeline.png":::
The player and the insights are synchronized. For example, if you click a keyword or the transcript line, the player brings you to that moment in the video. You can achieve the player/insights view and synchronization in your application. For more information, see [Embed Azure Indexer widgets into your application](video-indexer-embed-widgets.md).
- If you want to download artifact files, beware of the following:
+ For more information, see [Insights output](video-indexer-output-json-v2.md).
+
+## Considerations
+
+- [!INCLUDE [insights](./includes/insights.md)]
+- If you plan to download artifact files, beware of the following:
[!INCLUDE [artifacts](./includes/artifacts.md)]
- For more information, see [Insights output](video-indexer-output-json-v2.md).
-
## Next steps [Use your videos' deep insights](use-editor-create-project.md)
azure-vmware Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-customer-managed-keys.md
+
+ Title: Configure customer-managed key encryption at rest in Azure VMware Solution
+description: Learn how to encrypt data in Azure VMware Solution with customer-managed keys using Azure Key Vault.
+ Last updated : 6/30/2022+++
+# Configure customer-managed key encryption at rest in Azure VMware Solution
++
+This article illustrates how to encrypt VMware vSAN Key Encryption Keys (KEKs) with customer-managed keys (CMKs) managed by customer-owned Azure Key Vault.
+
+When CMK encryption is enabled on your Azure VMware Solution private cloud, Azure VMware Solution uses the CMK from your key vault to encrypt the vSAN KEKs. Each ESXi host that participates in the vSAN cluster uses randomly generated Disk Encryption Keys (DEKs) that ESXi uses to encrypt disk data at rest. vSAN encrypts all DEKs with a KEK provided by the Azure VMware Solution key management system (KMS). The Azure VMware Solution private cloud and the Azure Key Vault don't need to be in the same subscription.
+
+When managing your own encryption keys, you can do the following actions:
+
+- Control Azure access to vSAN keys.
+- Centrally manage the lifecycle of CMKs.
+- Revoke Azure from accessing the KEK.
+
+The customer-managed keys (CMKs) feature supports the following key types and key sizes:
+
+- RSA: 2048, 3072, 4096
+- RSA-HSM: 2048, 3072, 4096
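
For example, a minimal Azure CLI sketch that creates a compatible software-protected RSA key in your key vault; the vault and key names are placeholders:

```azurecli-interactive
az keyvault key create --vault-name <keyvault_name> --name <keyvault_key_name> --kty RSA --size 2048
```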
+
+## Topology
+
+The following diagram shows how Azure VMware Solution uses Azure Active Directory (Azure AD) and a key vault to deliver the customer-managed key.
++
+## Prerequisites
+
+Before you begin to enable customer-managed key (CMK) functionality, ensure the following listed requirements are met:
+
+- You'll need an Azure Key Vault to use CMK functionality. If you don't have an Azure Key Vault, you can create one using [Quickstart: Create a key vault using the Azure portal](https://docs.microsoft.com/azure/key-vault/general/quick-create-portal).
+- If you enabled restricted access to key vault, you'll need to allow Microsoft Trusted Services to bypass the Azure Key Vault firewall. Go to [Configure Azure Key Vault networking settings](https://docs.microsoft.com/azure/key-vault/general/how-to-azure-key-vault-network-security?tabs=azure-portal) to learn more.
+ >[!NOTE]
+ >After firewall rules are in effect, users can only perform Key Vault [data plane](https://docs.microsoft.com/azure/key-vault/general/security-features#privileged-access) operations when their requests originate from allowed VMs or IPv4 address ranges. This restriction also applies to accessing the key vault from the Azure portal and to the key vault picker used by Azure VMware Solution. Users may be able to see a list of key vaults but not list the keys if firewall rules block their client machine or the user doesn't have list permission in the key vault.
+
+- Enable **System Assigned identity** on your Azure VMware Solution private cloud if you didn't enable it during software-defined data center (SDDC) provisioning.
+
+ # [Portal](#tab/azure-portal)
+
+ Use the following steps to enable System Assigned identity:
+
+ 1. Sign in to Azure portal.
+
+ 2. Navigate to **Azure VMware Solution** and locate your SDDC.
+
+ 3. From the left navigation, open **Manage** and select **Identity**.
+
+ 4. In **System Assigned**, check **Enable** and select **Save**.
+ 1. **System Assigned identity** should now be enabled.
+
+ Once System Assigned identity is enabled, you'll see the tab for **Object ID**. Make note of the Object ID for use later.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Get the private cloud resource ID and save it to a variable. You'll need this value in the next step to update the resource with the system-assigned identity.
+
+ ```azurecli-interactive
+ privateCloudId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query id | tr -d '"')
+ ```
+
+ To configure the system-assigned identity on Azure VMware Solution private cloud with Azure CLI, call [az-resource-update](https://docs.microsoft.com/cli/azure/resource?view=azure-cli-latest#az-resource-update) and provide the variable for the private cloud resource ID that you previously retrieved.
+
+ ```azurecli-interactive
+ az resource update --ids $privateCloudId --set identity.type=SystemAssigned --api-version "2021-12-01"
+ ```
+
+- Configure the key vault access policy to grant permissions to the managed identity. It will be used to authorize access to the key vault.
+
+ # [Portal](#tab/azure-portal)
+
+ 1. Sign in to Azure portal.
+ 1. Navigate to **Key vaults** and locate the key vault you want to use.
+ 1. From the left navigation, under **Settings**, select **Access policies**.
+ 1. In **Access policies**, select **Add Access Policy**.
+ 1. From the Key Permissions drop-down, check **Select all**, **Unwrap Key**, and **Wrap Key**.
+ 1. Under Select principal, select **None selected**. A new **Principal** window with a search box will open.
+ 1. In the search box, paste the **Object ID** from the previous step, or search the private cloud name you want to use. Choose **Select** when you're done.
+ 1. Select **ADD**.
+ 1. Verify the new policy appears under the current policy's Application section.
+ 1. Select **Save** to commit changes.
+
+ # [Azure CLI](#tab/azure-cli)
+
+ Get the principal ID for the system-assigned managed identity and save it to a variable. You'll need this value in the next step to create the key vault access policy.
+
+ ```azurecli-interactive
+ principalId=$(az vmware private-cloud show --name $privateCloudName --resource-group $resourceGroupName --query identity.principalId | tr -d '"')
+ ```
+
+ To configure the key vault access policy with Azure CLI, call [az keyvault set-policy](https://docs.microsoft.com/cli/azure/keyvault#az-keyvault-set-policy) and provide the variable for the principal ID that you previously retrieved for the managed identity.
+
+ ```azurecli-interactive
+ az keyvault set-policy --name $keyVault --resource-group $resourceGroupName --object-id $principalId --key-permissions get unwrapKey wrapKey
+ ```
+
+ Learn more about how to [Assign an Azure Key Vault access policy](https://docs.microsoft.com/azure/key-vault/general/assign-access-policy?tabs=azure-portal).
++
+## Customer-managed key version lifecycle
+
+You can change the customer-managed key (CMK) by creating a new version of the key. The creation of a new version won't interrupt the virtual machine (VM) workflow.
+
+In Azure VMware Solution, CMK key version rotation will depend on the key selection setting you've chosen during CMK setup.
+
+**Key selection setting 1**
+
+A customer enables CMK encryption without supplying a specific key version for CMK. Azure VMware Solution selects the latest key version for CMK from the customer's key vault to encrypt the vSAN Key Encryption Keys (KEKs). Azure VMware Solution tracks the CMK for version rotation. When a new version of the CMK key in Azure Key Vault is created, it's captured by Azure VMware Solution automatically to encrypt vSAN KEKs.
+
+>[!NOTE]
+>Azure VMware Solution can take up to ten minutes to detect a new auto-rotated key version.
+
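For example, with key selection setting 1, rotating the CMK can be as simple as creating a new key version. The following is a minimal Azure CLI sketch; the names are placeholders, and creating a key with an existing name adds a new version of that key:

```azurecli-interactive
az keyvault key create --vault-name <keyvault_name> --name <keyvault_key_name>
```
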
+**Key selection setting 2**
+
+A customer can enable CMK encryption for a specified CMK key version by supplying the full key version URI under the **Enter Key from URI** option. When the customer's current key expires, they'll need to extend the CMK key expiration or disable CMK.
+
+## Enable CMK with system-assigned identity
+
+System-assigned identity is restricted to one per resource and is tied to the lifecycle of the resource. You can grant permissions to the managed identity on an Azure resource. The managed identity is authenticated with Azure AD, so you don't have to store any credentials in code.
+
+>[!IMPORTANT]
+> Ensure that key vault is in the same region as the Azure VMware Solution private cloud.
+
+# [Portal](#tab/azure-portal)
+
+Navigate to your **Azure Key Vault** and provide access to the SDDC on Azure Key Vault using the Principal ID captured in the **Enable MSI** tab.
+
+1. From your Azure VMware Solution private cloud, under **Manage**, select **Encryption**, then select **Customer-managed keys (CMK)**.
+1. CMK provides two options for **Key Selection** from Azure Key Vault.
+
+ **Option 1**
+
+ 1. Under **Encryption key**, choose the **select from Key Vault** button.
+ 1. Select the encryption type, then the **Select Key Vault and key** option.
+ 1. Select the **Key Vault and key** from the drop-down, then choose **Select**.
+
+ **Option 2**
+
+ 1. Under **Encryption key**, choose the **Enter key from URI** button.
+ 1. Enter a specific Key URI in the **Key URI** box.
+
+ > [!IMPORTANT]
+ > If you want to select a specific key version instead of the automatically selected latest version, you'll need to specify the key URI with key version. This will affect the CMK key version life cycle.
+
+1. Select **Save** to grant access to the resource.
+
+# [Azure CLI](#tab/azure-cli)
+
+To configure customer-managed keys for an Azure VMware Solution private cloud with automatic updating of the key version, call [az vmware private-cloud add-cmk-encryption](https://docs.microsoft.com/cli/azure/vmware/private-cloud?view=azure-cli-latest#az-vmware-private-cloud-add-cmk-encryption). Get the key vault URL and save it to a variable. You'll need this value in the next step to enable CMK.
+
+```azurecli-interactive
+keyVaultUrl=$(az keyvault show --name <keyvault_name> --resource-group <resource_group_name> --query properties.vaultUri --output tsv)
+```
+
+Options 1 and 2 below demonstrate the difference between not providing a specific key version and providing one.
+
+**Option 1**
+
+This example shows the customer not providing a specific key version.
+
+```azurecli-interactive
+az vmware private-cloud add-cmk-encryption --private-cloud <private_cloud_name> --resource-group <resource_group_name> --enc-kv-url $keyVaultUrl --enc-kv-key-name <keyvault_key_name>
+```
+
+**Option 2**
+
+Supply the key version as an argument to use customer-managed keys with a specific key version, the same as Azure portal option 2 above. The following example shows the customer providing a specific key version.
+
+```azurecli-interactive
+az vmware private-cloud add-cmk-encryption --private-cloud <private_cloud_name> --resource-group <resource_group_name> --enc-kv-url $keyVaultUrl --enc-kv-key-name <keyvault_key_name> --enc-kv-key-version <keyvault_key_keyVersion>
+```
++
+## Change from customer-managed key to Microsoft managed key
+
+Changing from a customer-managed key (CMK) to a Microsoft-managed key (MMK) doesn't interrupt the VM workload. To make the change from CMK to MMK, use the following steps.
+
+1. Select **Encryption**, located under **Manage** from your Azure VMware Solution private cloud.
+2. Select **Microsoft-managed keys (MMK)**.
+3. Select **Save**.
+
+## Limitations
+
+The Azure Key Vault must be configured as recoverable.
+
+- Configure Azure Key Vault with the **Soft Delete** option.
+- Turn on **Purge Protection** to guard against force deletion of the secret vault, even after soft delete.
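
For example, a minimal Azure CLI sketch that turns on purge protection for an existing vault; soft delete is enabled by default on new vaults, and the vault name is a placeholder:

```azurecli-interactive
az keyvault update --name <keyvault_name> --enable-purge-protection true
```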
+
+Updating CMK settings won't work if the key is expired or the Azure VMware Solution access key has been revoked.
+
+## Troubleshooting and best practices
+
+**Accidental deletion of a key**
+
+If you accidentally delete your key in Azure Key Vault, the private cloud won't be able to perform some cluster modification operations. To avoid this scenario, we recommend that you keep soft delete enabled on the key vault. This option ensures that, if a key is deleted, it can be recovered within the default 90-day soft-delete retention period. If you're within the 90-day period, you can restore the key to resolve the issue.
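
If the key was deleted within the retention period, the following minimal Azure CLI sketch recovers it; the names are placeholders:

```azurecli-interactive
az keyvault key recover --vault-name <keyvault_name> --name <keyvault_key_name>
```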
+
+**Restore key vault permission**
+
+If you have a private cloud that has lost access to the customer-managed key, check whether the Managed System Identity (MSI) needs permissions in the key vault. The error notification returned from Azure may not correctly indicate missing MSI permissions on the key vault as the root cause. Remember, the required permissions are: get, wrapKey, and unwrapKey. See step 4 in [Prerequisites](#prerequisites).
+
+**Fix expired key**
+
+If you aren't using the auto-rotate function and the customer-managed key has expired in the key vault, you can change the expiration date on the key.
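
For example, a minimal Azure CLI sketch for extending the key's expiration date; the names and the date are placeholders:

```azurecli-interactive
az keyvault key set-attributes --vault-name <keyvault_name> --name <keyvault_key_name> --expires 2025-12-31T23:59:59Z
```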
+
+**Restore key vault access**
+
+Ensure Managed System Identity (MSI) is used for providing private cloud access to key vault.
+
+**Deletion of MSI**
+
+If you accidentally delete the Managed System Identity (MSI) associated with the private cloud, you'll need to disable CMK and then follow the steps to enable CMK from the start.
+
+## Next steps
+
+Learn about [Azure Key Vault backup and restore](https://docs.microsoft.com/azure/key-vault/general/backup?tabs=azure-cli).
+
+Learn about [Azure Key Vault recovery](https://docs.microsoft.com/azure/key-vault/general/key-vault-recovery?tabs=azure-portal#list-recover-or-purge-a-soft-deleted-key-vault).
azure-vmware Configure Dhcp Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-dhcp-azure-vmware-solution.md
description: Learn how to configure DHCP by using either NSX-T Manager to host a
Previously updated : 04/08/2022 Last updated : 10/04/2022 # Customer intent: As an Azure service administrator, I want to configure DHCP by using either NSX-T Manager to host a DHCP server or use a third-party external DHCP server.
You can create a DHCP server or relay directly from Azure VMware Solution in the
:::image type="content" source="media/networking/add-dhcp-server-relay.png" alt-text="Screenshot showing how to add a DHCP server or DHCP relay in Azure VMware Solutions.":::
-4. Complete the DHCP configuration by [providing DHCP ranges on the logical segments](tutorial-nsx-t-network-segment.md#use-azure-portal-to-add-an-nsx-t-segment) and then select **OK**.
+4. Complete the DHCP configuration by [providing DHCP ranges on the logical segments](tutorial-nsx-t-network-segment.md#use-azure-portal-to-add-an-nsx-t-data-center-segment) and then select **OK**.
azure-vmware Configure Nsx Network Components Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-nsx-network-components-azure-portal.md
Title: Configure NSX-T Data Center network components using Azure VMware Solutio
description: Learn how to use the Azure VMware Solution to configure NSX-T Data Center network segments. Previously updated : 04/11/2022 Last updated : 10/04/2022 # Customer intent: As an Azure service administrator, I want to configure NSX-T Data Center network components using a simplified view of NSX-T Data Center operations a VMware administrator needs daily. The simplified view is targeted at users unfamiliar with NSX-T Manager.
After deploying Azure VMware Solution, you can configure the necessary NSX-T Dat
You'll have four options to configure NSX-T Data Center components in the Azure VMware Solution console: -- **Segments** - Create segments that display in NSX-T Manager and vCenter Server. For more information, see [Add an NSX-T Data Center segment using the Azure portal](tutorial-nsx-t-network-segment.md#use-azure-portal-to-add-an-nsx-t-segment).
+- **Segments** - Create segments that display in NSX-T Manager and vCenter Server. For more information, see [Add an NSX-T Data Center segment using the Azure portal](tutorial-nsx-t-network-segment.md#use-azure-portal-to-add-an-nsx-t-data-center-segment).
- **DHCP** - Create a DHCP server or DHCP relay if you plan to use DHCP. For more information, see [Use the Azure portal to create a DHCP server or relay](configure-dhcp-azure-vmware-solution.md#use-the-azure-portal-to-create-a-dhcp-server-or-relay).
azure-vmware Deploy Vsan Stretched Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md
Title: Deploy vSAN stretched clusters
+ Title: Deploy vSAN stretched clusters (Preview)
description: Learn how to deploy vSAN stretched clusters.
Last updated 09/02/2022
-# Deploy vSAN stretched clusters
+# Deploy vSAN stretched clusters (Preview)
In this article, you'll learn how to implement a vSAN stretched cluster for an Azure VMware Solution private cloud.
Stretched clusters allow the configuration of vSAN Fault Domains across two AZs
To protect against split-brain scenarios and help measure site health, a managed vSAN Witness is created in a third AZ. With a copy of the data in each AZ, vSphere HA attempts to recover from any failure using a simple restart of the virtual machine.
-**vSAN stretched cluster**
+The following diagram depicts a vSAN cluster stretched across two AZs.
:::image type="content" source="media/stretch-clusters/diagram-1-vsan-witness-third-availability-zone.png" alt-text="Diagram shows a managed vSAN stretched cluster created in a third Availability Zone with the data being copied to all three of them.":::
In summary, stretched clusters simplify protection needs by providing the same t
It's important to understand that stretched cluster private clouds only offer an extra layer of resiliency, and they don't address all failure scenarios. For example, stretched cluster private clouds: - Don't protect against region-level failures within Azure or data loss scenarios caused by application issues or poorly planned storage policies. - Provide protection against a single zone failure but aren't designed to protect against double or progressive failures. For example:
- - Despite various layers of redundancy built into the fabric, if an inter-AZ failure results in the partitioning of the secondary site, vSphere HA starts powering off the workload VMs on the secondary site. The following diagram shows the secondary site partitioning scenario.
+ - Despite various layers of redundancy built into the fabric, if an inter-AZ failure results in the partitioning of the secondary site, vSphere HA starts powering off the workload VMs on the secondary site.
+
+ The following diagram shows the secondary site partitioning scenario.
:::image type="content" source="media/stretch-clusters/diagram-2-secondary-site-power-off-workload.png" alt-text="Diagram shows vSphere high availability powering off the workload virtual machines on the secondary site.":::
- - If the secondary site partitioning progressed into the failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site. If vSphere HA attempted to restart the workload VMs on the secondary site, it would put the workload VMs in an unsteady state. The following diagram shows the preferred site failure or complete partitioning scenario.
+ - If the secondary site partitioning progressed into the failure of the primary site instead, or resulted in a complete partitioning, vSphere HA would attempt to restart the workload VMs on the secondary site. If vSphere HA attempted to restart the workload VMs on the secondary site, it would put the workload VMs in an unsteady state.
+
+
+ The following diagram shows the preferred site failure or complete partitioning scenario.
:::image type="content" source="media/stretch-clusters/diagram-3-restart-workload-secondary-site.png" alt-text="Diagram shows vSphere high availability trying to restart the workload virtual machines on the secondary site when preferred site failure or complete partitioning occurs.":::
-It should be noted that these types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of this, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important you understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is because a Disaster Recovery solution typically has separate management and control planes in separate Azure regions. Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, one NSX-T Data Center Edge VM pair.
+These types of failures, although rare, fall outside the scope of the protection offered by a stretched cluster private cloud. Because of those rare failure types, a stretched cluster solution should be regarded as a multi-AZ high availability solution reliant upon vSphere HA. It's important to understand that a stretched cluster solution isn't meant to replace a comprehensive multi-region Disaster Recovery strategy that can be employed to ensure application availability. The reason is that a Disaster Recovery solution typically has separate management and control planes in separate Azure regions, while Azure VMware Solution stretched clusters have a single management and control plane stretched across two availability zones within the same Azure region. For example, one vCenter Server, one NSX-T Manager cluster, and one NSX-T Data Center Edge VM pair.
## Deploy a stretched cluster private cloud
-Currently, Azure VMware Solution stretched clusters is in a limited availability phase. In the limited availability phase, you must contact Microsoft to request and qualify for support.
+Currently, Azure VMware Solution stretched clusters are in preview. While in preview, you must contact Microsoft to request and qualify for support.
## Prerequisites
Azure VMware Solution stretched clusters are available in the following regions:
Currently, only the three regions listed above are planned to support stretched clusters.
-### What kind of SLA does Azure VMware Solution provide with the stretched clusters limited availability release?
+### What kind of SLA does Azure VMware Solution provide with the stretched clusters (preview) release?
A private cloud created with a vSAN stretched cluster is designed to offer a 99.99% infrastructure availability commitment when the following conditions exist: - A minimum of 6 nodes are deployed in the cluster (3 in each availability zone)
No. A stretched cluster is created between two availability zones, while the thi
- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment. - Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority. - The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ.-- Preview features for standard private cloud environments aren't supported in a stretched cluster environment. For example, external storage options like disk pools and Azure NetApp Files (ANF), Customer Management Keys, Public IP via NSX-T Data Center Edge, and others.
+- Preview and recent GA features for standard private cloud environments aren't supported in a stretched cluster environment.
- Disaster recovery add-ons such as VMware SRM, Zerto, and JetStream are currently not supported in a stretched cluster environment. ### What kind of latencies should I expect between the availability zones (AZs)?
-vSAN stretched clusters operate within a 5 minute round trip time (RTT) and 10 Gb/s or greater bandwidth between the AZs that host the workload VMs. The Azure VMware Solution stretched cluster deployment follows that guiding principle. Consider that information when deploying applications (with SFTT of dual site mirroring, which uses synchronous writes) that have stringent latency requirements.
+vSAN stretched clusters operate within a 5-millisecond round trip time (RTT) and 10 Gb/s or greater bandwidth between the AZs that host the workload VMs. The Azure VMware Solution stretched cluster deployment follows that guiding principle. Consider that information when deploying applications (with SFTT of dual site mirroring, which uses synchronous writes) that have stringent latency requirements.
### Can I mix stretched and standard clusters in my private cloud?
Customers will be charged based on the number of nodes deployed within the priva
### Will I be charged for the witness node and for inter-AZ traffic?
-No. While in limited availability, customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
+No. While in preview, customers won't see a charge for the witness node and the inter-AZ traffic. The witness node is entirely service managed, and Azure VMware Solution provides the required lifecycle management of the witness node. As the entire solution is service managed, the customer only needs to identify the appropriate SPBM policy to set for the workload virtual machines. The rest is managed by Microsoft.
### Which SKUs are available?
azure-vmware Tutorial Nsx T Network Segment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-nsx-t-network-segment.md
description: Learn how to add a network segment to use for virtual machines (VMs
Previously updated : 07/16/2021 Last updated : 09/24/2022 # Tutorial: Add a network segment in Azure VMware Solution
-After deploying Azure VMware Solution, you can configure an NSX-T network segment from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter Server. NSX-T Data Center comes pre-provisioned by default with an NSX-T Tier-0 gateway in **Active/Active** mode and a default NSX-T Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
+After deploying Azure VMware Solution, you can configure an NSX-T Data Center network segment from NSX-T Manager or the Azure portal. Once configured, the segments are visible in Azure VMware Solution, NSX-T Manager, and vCenter Server. NSX-T Data Center comes pre-provisioned by default with an NSX-T Data Center Tier-0 gateway in **Active/Active** mode and a default NSX-T Data Center Tier-1 gateway in **Active/Standby** mode. These gateways let you connect the segments (logical switches) and provide East-West and North-South connectivity.
>[!TIP]
->The Azure portal presents a simplified view of NSX-T operations a VMware administrator needs regularly and targeted at users not familiar with NSX-T Manager.
+>The Azure portal presents a simplified view of the NSX-T Data Center operations that a VMware administrator needs regularly, targeted at users who aren't familiar with NSX-T Manager.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
An Azure VMware Solution private cloud with access to the vCenter Server and NSX-T Manager interfaces. For more information, see the [Configure networking](tutorial-configure-networking.md) tutorial.
-## Use Azure portal to add an NSX-T segment
+## Use Azure portal to add an NSX-T Data Center segment
[!INCLUDE [create-nsxt-segment-azure-portal-steps](includes/create-nsxt-segment-azure-portal-steps.md)] ## Use NSX-T Manager to add network segment
-The virtual machines (VMs) created in vCenter Server are placed onto the network segments created in NSX-T and are visible in vCenter Server.
+The virtual machines (VMs) created in vCenter Server are placed onto the network segments created in NSX-T Data Center and are visible in vCenter Server.
[!INCLUDE [add-network-segment-steps](includes/add-network-segment-steps.md)]
Verify the presence of the new network segment. In this example, **ls01** is the
1. In NSX-T Manager, select **Networking** > **Segments**.
- :::image type="content" source="media/nsxt/nsxt-new-segment-overview-2.png" alt-text="Screenshot showing the confirmation and status of the new network segment is present in NSX-T.":::
+ :::image type="content" source="media/nsxt/nsxt-new-segment-overview-2.png" alt-text="Screenshot showing the confirmation and status of the new network segment is present in NSX-T Data Center.":::
1. In vCenter Server, select **Networking** > **SDDC-Datacenter**.
Verify the presence of the new network segment. In this example, **ls01** is the
## Next steps
-In this tutorial, you created an NSX-T network segment to use for VMs in vCenter Server.
+In this tutorial, you created an NSX-T Data Center network segment to use for VMs in vCenter Server.
You can now:
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
Learn about instances and regions supported for NC2 on Azure.
Nutanix Clusters on Azure supports: * Minimum of three bare metal nodes per cluster.
-* Maximum of 16 bare metal nodes.
+* Maximum of 13 bare metal nodes.
* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure. * Prism Central instance deployed on Nutanix Clusters on Azure to manage the Nutanix clusters in Azure.
cloud-shell Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md
Title: Azure Cloud Shell features | Microsoft Docs
description: Overview of features in Azure Cloud Shell documentationcenter: ''-+ tags: azure-resource-manager
vm-linux Previously updated : 04/26/2019- Last updated : 09/20/2022+ # Features & tools for Azure Cloud Shell [!INCLUDE [features-introblock](../../includes/cloud-shell-features-introblock.md)]
-Azure Cloud Shell runs on `Common Base Linux Delridge`.
+Azure Cloud Shell runs on **Common Base Linux - Mariner** (CBL-Mariner),
+Microsoft's Linux distribution for cloud-infrastructure-edge products and services.
+
+Microsoft internally compiles all the packages included in the **CBL-Mariner** repository to help
+guard against supply chain attacks. Tooling has been updated to reflect the new base image
+CBL-Mariner. You can get a full list of installed package versions using the following command:
+`tdnf list installed`. If these changes affected your Cloud Shell environment, please contact
+Azure support or create an issue in the
+[Cloud Shell repository](https://github.com/Azure/CloudShell/issues).
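For example, to check whether a particular package is present in the image (the package name here is only illustrative):

```bash
# Filter the installed package list for a specific package
tdnf list installed | grep -i openssl
```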
## Features
cognitive-services Improve Accuracy Phrase List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/improve-accuracy-phrase-list.md
Now try Speech Studio to see how phrase list can improve recognition accuracy.
## Implement phrase list ::: zone pivot="programming-language-csharp"
-With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition. Then you can optionally clear or update the phrase list to take effect before the next recognition.
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
```csharp var phraseList = PhraseListGrammar.FromRecognizer(recognizer); phraseList.AddPhrase("Contoso"); phraseList.AddPhrase("Jessie"); phraseList.AddPhrase("Rehaan");
-phraseList.Clear();
``` ::: zone-end ::: zone pivot="programming-language-cpp"
-With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition. Then you can optionally clear or update the phrase list to take effect before the next recognition.
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
```cpp auto phraseListGrammar = PhraseListGrammar::FromRecognizer(recognizer); phraseListGrammar->AddPhrase("Contoso"); phraseListGrammar->AddPhrase("Jessie"); phraseListGrammar->AddPhrase("Rehaan");
-phraseListGrammar->Clear();
``` ::: zone-end ::: zone pivot="programming-language-java"
-With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition. Then you can optionally clear or update the phrase list to take effect before the next recognition.
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
```java PhraseListGrammar phraseList = PhraseListGrammar.fromRecognizer(recognizer); phraseList.addPhrase("Contoso"); phraseList.addPhrase("Jessie"); phraseList.addPhrase("Rehaan");
-phraseList.clear();
``` ::: zone-end ::: zone pivot="programming-language-javascript"
-With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition. Then you can optionally clear or update the phrase list to take effect before the next recognition.
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
```javascript const phraseList = sdk.PhraseListGrammar.fromRecognizer(recognizer); phraseList.addPhrase("Contoso"); phraseList.addPhrase("Jessie"); phraseList.addPhrase("Rehaan");
-phraseList.clear();
``` ::: zone-end ::: zone pivot="programming-language-python"
-With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition. Then you can optionally clear or update the phrase list to take effect before the next recognition.
+With the [Speech SDK](speech-sdk.md) you can add phrases individually and then run speech recognition.
```Python phrase_list_grammar = speechsdk.PhraseListGrammar.from_recognizer(reco) phrase_list_grammar.addPhrase("Contoso") phrase_list_grammar.addPhrase("Jessie") phrase_list_grammar.addPhrase("Rehaan")
-phrase_list_grammar.clear()
``` ::: zone-end
cognitive-services Cognitive Services Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md
For more information, see <a href="/cpp/c-runtime-library/reference/getenv-s-wge
#include <iostream> #include <stdlib.h>
-std::string getEnvironmentVariable(const char* name);
+std::string GetEnvironmentVariable(const char* name);
int main() { // Get the named env var, and assign it to the value variable
- auto value = getEnvironmentVariable("ENVIRONMENT_VARIABLE_KEY");
+ auto value = GetEnvironmentVariable("ENVIRONMENT_VARIABLE_KEY");
}
-std::string getEnvironmentVariable(const char* name)
+std::string GetEnvironmentVariable(const char* name)
{ #if defined(_MSC_VER) size_t requiredSize = 0;
container-apps Custom Domains Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/custom-domains-certificates.md
Azure Container Apps allows you to bind one or more custom domains to a containe
Once the add operation is complete, you see your domain name in the list of custom domains.
+> [!NOTE]
+> For container apps in internal Container Apps environments, [additional configuration](./networking.md#dns) is required to use custom domains with VNET-scope ingress.
+ ## Managing certificates You can manage certificates via the Container Apps environment or through an individual container app.
You can manage your certificates for an individual domain name by selecting the
## Next steps > [!div class="nextstepaction"]
-> [Authentication in Azure Container Apps](authentication.md)
+> [Authentication in Azure Container Apps](authentication.md)
container-apps Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress.md
Title: Set up HTTPS ingress in Azure Container Apps
+ Title: Set up HTTPS or TCP ingress in Azure Container Apps
description: Enable public and private endpoints in your app with Azure Container Apps Previously updated : 11/02/2021 Last updated : 09/29/2022
-# Set up HTTPS ingress in Azure Container Apps
+# Set up HTTPS or TCP ingress in Azure Container Apps
-Azure Container Apps allows you to expose your container app to the public web by enabling ingress. When you enable ingress, you don't need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTPS requests.
+Azure Container Apps allows you to expose your container app to the public web, to your VNET, or to other container apps within your environment by enabling ingress. When you enable ingress, you don't need to create an Azure Load Balancer, public IP address, or any other Azure resources to enable incoming HTTPS requests.
-With ingress enabled, your container app features the following characteristics:
+Each container app can be configured with different ingress settings. For example, you can have one container app that is exposed to the public web and another that is only accessible from within your Container Apps environment.
+
+## Ingress types
+
+Azure Container Apps supports two types of ingress: HTTPS and TCP.
+
+### HTTPS
+
+With HTTPS ingress enabled, your container app features the following characteristics:
- Supports TLS termination - Supports HTTP/1.1 and HTTP/2 - Supports WebSocket and gRPC - HTTPS endpoints always use TLS 1.2, terminated at the ingress point-- Endpoints always expose ports 80 (for HTTP) and 443 (for HTTPS).
- - By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443.
-- Request timeout is 240 seconds.
+- Endpoints always expose ports 80 (for HTTP) and 443 (for HTTPS)
+ - By default, HTTP requests to port 80 are automatically redirected to HTTPS on 443
+- The container app is accessed via its fully qualified domain name (FQDN)
+- Request timeout is 240 seconds
+
+### <a name="tcp"></a>TCP (preview)
+
+TCP ingress is useful for exposing container apps that use a TCP-based protocol other than HTTP or HTTPS.
+
+> [!NOTE]
+> TCP ingress is in public preview and is only supported in Container Apps environments that use a [custom VNET](vnet-custom.md).
+>
+> To enable TCP ingress, use ARM or Bicep (API version `2022-06-01-preview` or above), or the Azure CLI.
+
+With TCP ingress enabled, your container app features the following characteristics:
+
+- The container app is accessed via its fully qualified domain name (FQDN) and exposed port number
+- Other container apps in the same environment can also access a TCP ingress-enabled container app by using its name (defined by the `name` property in the Container Apps resource) and exposed port number
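As an illustrative sketch only (the app name, resource group, and ports are placeholders, and the flag names are assumptions based on the `containerapp` CLI extension rather than values confirmed by this article), enabling external TCP ingress from the Azure CLI might look like this:

```azurecli
# Enable external TCP ingress on a container app in a custom VNET environment
az containerapp ingress enable \
  --name my-tcp-app \
  --resource-group my-resource-group \
  --type external \
  --transport tcp \
  --target-port 6379 \
  --exposed-port 6000
```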
## Configuration
The ingress configuration section has the following form:
"ingress": { "external": true, "targetPort": 80,
- "transport": auto
+ "transport": "auto"
} } }
The following settings are available when configuring ingress:
| Property | Description | Values | Required | |||||
-| `external` | The ingress IP and fully qualified domain name (FQDN) can either be accessible externally from the internet or a VNET, or internally within the app environment only. | `true` for external visibility from the internet or a VNET, `false` for internal visibility within app environment only (default) | Yes |
+| `external` | Whether your ingress-enabled app is accessible outside its Container Apps environment. | `true` for visibility from the internet or a VNET, depending on the configured app environment endpoint; `false` (default) for visibility within the app environment only. | Yes |
| `targetPort` | The port your container listens to for incoming requests. | Set this value to the port number that your container uses. Your application ingress endpoint is always exposed on port `443`. | Yes |
-| `transport` | You can use either HTTP/1.1 or HTTP/2, or you can set it to automatically detect the transport type. | `http` for HTTP/1, `http2` for HTTP/2, `auto` to automatically detect the transport type (default) | No |
+| `exposedPort` | (TCP ingress only) The port used to access the app. If `external` is `true`, the value must be unique in the Container Apps environment and cannot be `80` or `443`. | A port number from `1` to `65535`. | No |
+| `transport` | The transport type. | `http` for HTTP/1, `http2` for HTTP/2, `auto` to automatically detect HTTP/1 or HTTP/2 (default), `tcp` for TCP. | No |
| `allowInsecure` | Allows insecure traffic to your container app. | `false` (default), `true`<br><br>If set to `true`, HTTP requests to port 80 aren't automatically redirected to port 443 using HTTPS, allowing insecure connections. | No | > [!NOTE]
-> To disable ingress for your application, you can omit the `ingress` configuration property entirely.
+> To disable ingress for your application, omit the `ingress` configuration property entirely.
## IP addresses and domain names
With ingress enabled, your application is assigned a fully qualified domain name
| External | `<APP_NAME>.<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`| | Internal | `<APP_NAME>.internal.<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io` |
-Your Container Apps environment has a single public IP address for applications with `external` ingress visibility, and a single internal IP address for applications with `internal` ingress visibility. Therefore, all applications within a Container Apps environment with `external` ingress visibility share a single public IP address. Similarly, all applications within a Container Apps environment with `internal` ingress visibility share a single internal IP address. HTTP traffic is routed to individual applications based on the FQDN in the host header.
+For HTTP ingress, traffic is routed to individual applications based on the FQDN in the host header.
+
+For TCP ingress, traffic is routed to individual applications based on the FQDN and its *exposed* port number. Other container apps in the same environment can also access a TCP ingress-enabled container app by using its name (defined by the container app's `name` property) and its *exposedPort* number.
+
+For applications with external ingress visibility, the following conditions apply:
+- An internal Container Apps environment has a single private IP address for applications. For container apps in internal environments, you must configure [DNS](./networking.md#dns) for VNET-scope ingress.
+- An external Container Apps environment, or a Container Apps environment that isn't in a VNET, has a single public IP address for applications.
You can get access to the environment's unique identifier by querying the environment settings. [!INCLUDE [container-apps-get-fully-qualified-domain-name](../../includes/container-apps-get-fully-qualified-domain-name.md)]
+## Next steps
+ > [!div class="nextstepaction"] > [Manage scaling](scale-app.md)
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Previously updated : 06/10/2022 Last updated : 09/27/2022
There are two scale properties that apply to all rules in your container app:
Azure Container Apps supports the following scale triggers: - [HTTP traffic](#http): Scaling based on the number of concurrent HTTP requests to your revision.
+- [TCP traffic](#tcp): Scaling based on the number of concurrent TCP requests to your revision.
- [Event-driven](#event-driven): Event-based triggers such as messages in an Azure Service Bus. - [CPU](#cpu) or [Memory](#memory) usage: Scaling based on the amount of CPU or memory consumed by a replica.
In the following example, the container app scales out up to five replicas and c
:::image type="content" source="media/scalers/create-http-scale-rule.png" alt-text="A screenshot showing the newly created http scale rule.":::
+## TCP
+
+With a TCP scaling rule, you have control over the threshold that determines when to scale out.
+
+| Scale property | Description | Default value | Min value | Max value |
+||||||
+| `concurrentRequests` | When the number of concurrent requests exceeds this value, another replica is added. Replicas continue to be added, up to the `maxReplicas` amount, as the number of concurrent requests increases. | 10 | 1 | n/a |
+
+In the following example, the container app scales out up to five replicas and can scale down to zero. The scaling threshold is set to 100 concurrent requests per second.
+
+```json
+{
+ ...
+ "resources": {
+ ...
+ "properties": {
+ ...
+ "template": {
+ ...
+ "scale": {
+ "minReplicas": 0,
+ "maxReplicas": 5,
+ "rules": [{
+ "name": "tcp-rule",
+ "tcp": {
+ "metadata": {
+ "concurrentRequests": "100"
+ }
+ }
+ }]
+ }
+ }
+ }
+ }
+}
+```
## Event-driven Container Apps can scale based on a wide variety of event types. Any event supported by [KEDA](https://keda.sh/docs/scalers/) is supported in Container Apps.
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
$VnetName = 'my-custom-vnet'
Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container app instance. > [!NOTE]
-> You can use an existing virtual network, but two empty subnets are required to use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps.
# [Bash](#tab/bash)
You must either provide values for all three of these properties, or none of the
If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group will also delete the resource group automatically created by the Container Apps service containing the custom network components. + >[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this guide exist in the specified resource group, they will also be deleted. - # [Bash](#tab/bash) ```azurecli
Remove-AzResourceGroup -Name $ResourceGroupName -Force
## Additional resources -- For more information about configuring your private endpoints, see [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md).--- To set up DNS name resolution for internal services, you must [set up your own DNS server](../dns/index.yml).
+- To use VNET-scope ingress, you must set up [DNS](./networking.md#dns).
## Next steps
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)] > [!NOTE]
-> Network address prefixes requires a CIDR range of `/23`.
+> Network address prefixes require a CIDR range of `/23` or larger (`/23`, `/22`, and so on).
7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*.
$VnetName = 'my-custom-vnet'
Now create an Azure virtual network to associate with the Container Apps environment. The virtual network must have a subnet available for the environment deployment. > [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CDIR range of `/23` is required for use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps.
# [Bash](#tab/bash)
You must either provide values for all three of these properties, or none of the
If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the **my-container-apps** resource group. Deleting this resource group will also delete the resource group automatically created by the Container Apps service containing the custom network components. + >[!CAUTION] > The following command deletes the specified resource group and all resources contained within it. If resources outside the scope of this guide exist in the specified resource group, they will also be deleted. - # [Bash](#tab/bash) ```azurecli
Remove-AzResourceGroup -Name $ResourceGroupName -Force
::: zone-end
-## Additional resources
--- For more information about configuring your private endpoints, see [What is Azure Private Endpoint](../private-link/private-endpoint-overview.md).-- To set up DNS name resolution for internal services, you must [set up your own DNS server](../dns/index.yml).- ## Next steps > [!div class="nextstepaction"]
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md
Title: Geo-replicate a registry
-description: Get started creating and managing a geo-replicated Azure container registry, which enables the registry to serve multiple regions with multi-master regional replicas. Geo-replication is a feature of the Premium service tier.
+description: Get started creating and managing a geo-replicated Azure container registry, which enables the registry to serve multiple regions with multi-primary regional replicas. Geo-replication is a feature of the Premium service tier.
Last updated 06/28/2021
# Geo-replication in Azure Container Registry
-Companies that want a local presence, or a hot backup, choose to run services from multiple Azure regions. As a best practice, placing a container registry in each region where images are run allows network-close operations, enabling fast, reliable image layer transfers. Geo-replication enables an Azure container registry to function as a single registry, serving multiple regions with multi-master regional registries.
+Companies that want a local presence, or a hot backup, choose to run services from multiple Azure regions. As a best practice, placing a container registry in each region where images are run allows network-close operations, enabling fast, reliable image layer transfers. Geo-replication enables an Azure container registry to function as a single registry, serving multiple regions with multi-primary regional registries.
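As an illustrative sketch (not part of the original article; the registry name and region are placeholders, and the registry must be in the Premium tier), adding a regional replica with the Azure CLI might look like this:

```azurecli
# Add a replica of an existing Premium registry in another region
az acr replication create --registry myregistry --location westeurope
```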
A geo-replicated registry provides the following benefits:
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
For example, after five days of soft deleting the artifact, if the user changes
## Known issues
->* Enabling the soft delete policy with AZ through ARM template leaves the registry stuck in the `creation` state. To avoid this, we recommend deleting and recreating the registry by disabling the soft delete policy.
+>* Enabling the soft delete policy with AZ through an ARM template leaves the registry stuck in the `creation` state. If you see this error, delete and recreate the registry with either the soft delete policy or geo-replication disabled.
>* Accessing the manage deleted artifacts blade after disabling the soft delete policy will throw an error message with 405 status. >* Customers with restricted restore permissions will see a `File not found` issue. ## Enable soft delete policy for registry - CLI
container-registry Container Registry Transfer Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-troubleshooting.md
Title: ACR Transfer Troubleshooting description: Troubleshoot ACR Transfer Previously updated : 11/18/2021 Last updated : 09/24/2022
* **AzCopy issues** * See [Troubleshoot AzCopy issues](../storage/common/storage-use-azcopy-configure.md). * **Artifacts transfer problems**
- * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you are transferring a maximum of 50 artifacts.
+ * Not all artifacts, or none, are transferred. Confirm spelling of artifacts in export run, and name of blob in export and import runs. Confirm you're transferring a maximum of 50 artifacts.
* Pipeline run might not have completed. An export or import run can take some time. * For other pipeline issues, provide the deployment [correlation ID](../azure-resource-manager/templates/deployment-history.md) of the export run or import run to the Azure Container Registry team.
+ * To create ACR Transfer resources such as `exportPipelines`, `importPipelines`, and `pipelineRuns`, the user must have at least Contributor access on the ACR subscription. Otherwise, they'll see errors indicating that authorization to perform the transfer was denied or that the scope is invalid.
* **Problems pulling the image in a physically isolated environment**
- * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If this is the case, you will need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-)
+ * If you see errors regarding foreign layers or attempts to resolve mcr.microsoft.com when attempting to pull an image in a physically isolated environment, your image manifest likely has non-distributable layers. Due to the nature of a physically isolated environment, these images will often fail to pull. You can confirm that this is the case by checking the image manifest for any references to external registries. If so, you'll need to push the non-distributable layers to your public cloud ACR prior to deploying an export pipeline-run for that image. For guidance on how to do this, see [How do I push non-distributable layers to a registry?](./container-registry-faq.yml#how-do-i-push-non-distributable-layers-to-a-registry-)
<!-- LINKS - External --> [terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
cosmos-db Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/role-based-access-control.md
The following are the built-in roles supported by Azure Cosmos DB:
||| |[DocumentDB Account Contributor](../role-based-access-control/built-in-roles.md#documentdb-account-contributor)|Can manage Azure Cosmos DB accounts.| |[Cosmos DB Account Reader](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role)|Can read Azure Cosmos DB account data.|
-|[Cosmos Backup Operator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator)| Can submit a restore request for Azure portal for a periodic backup enabled database or a container. Can modify the backup interval and retention on the Azure portal. Cannot access any data or use Data Explorer. |
-| [CosmosRestoreOperator](../role-based-access-control/built-in-roles.md) | Can perform restore action for Azure Cosmos DB account with continuous backup mode.|
-|[Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator)|Can provision Azure Cosmos accounts, databases, and containers. Cannot access any data or use Data Explorer.|
+|[CosmosBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator)| Can submit a restore request in the Azure portal for a periodic backup enabled database or a container. Can modify the backup interval and retention in the Azure portal. Cannot access any data or use Data Explorer. |
+| [CosmosRestoreOperator](../role-based-access-control/built-in-roles.md#cosmosrestoreoperator) | Can perform a restore action for an Azure Cosmos DB account with continuous backup mode.|
+|[Cosmos DB Operator](../role-based-access-control/built-in-roles.md#cosmos-db-operator)|Can provision Azure Cosmos DB accounts, databases, and containers. Cannot access any data or use Data Explorer.|
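As a hedged example (the account, resource group, and assignee are placeholders, not values from this article), assigning one of these built-in roles at the account scope with the Azure CLI might look like this:

```azurecli
# Look up the Azure Cosmos DB account's resource ID
accountId=$(az cosmosdb show --name myCosmosAccount --resource-group myResourceGroup --query id -o tsv)

# Assign the CosmosBackupOperator built-in role to a user at the account scope
az role assignment create \
  --assignee "user@contoso.com" \
  --role "CosmosBackupOperator" \
  --scope "$accountId"
```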
## Identity and access management (IAM)
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool-metadata-driven.md
Each row in control table contains the metadata for one object (for example, one
| Column name | Description | |: |: |
-| Id | Unique ID of the object to be copied. |
+| ID | Unique ID of the object to be copied. |
| SourceObjectSettings | Metadata of source dataset. It can be schema name, table name etc. Here is an [example](connector-azure-sql-database.md#dataset-properties). | | SourceConnectionSettingsName | The name of the source connection setting in connection control table. It is optional. | | CopySourceSettings | Metadata of source property in copy activity. It can be query, partitions etc. Here is an [example](connector-azure-sql-database.md#azure-sql-database-as-the-source). |
This pipeline will copy objects from one group. The objects belonging to this gr
## Next steps Try these tutorials that use the Copy Data tool: -- [Quickstart: create a data factory using the Copy Data tool](quickstart-create-data-factory-copy-data-tool.md)-- [Tutorial: copy data in Azure using the Copy Data tool](tutorial-copy-data-tool.md) -- [Tutorial: copy on-premises data to Azure using the Copy Data tool](tutorial-hybrid-copy-data-tool.md)
+- [Quickstart: Create a data factory using the Copy Data tool](quickstart-hello-world-copy-data-tool.md)
+- [Tutorial: Copy data in Azure using the Copy Data tool](tutorial-copy-data-tool.md)
+- [Tutorial: Copy on-premises data to Azure using the Copy Data tool](tutorial-hybrid-copy-data-tool.md)
data-factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/copy-data-tool.md
The following table provides guidance on when to use the Copy Data tool vs. per-
| You want to easily build a data loading task without learning about entities (linked services, datasets, pipelines, etc.) | You want to implement complex and flexible logic for loading data into lake. | | You want to quickly load a large number of data artifacts into a data lake. | You want to chain Copy activity with subsequent activities for cleansing or processing data. |
-To start the Copy Data tool, click the **Ingest** tile on the home page of your the Data Factory or Synapse Studio UI.
+To start the Copy Data tool, click the **Ingest** tile on the home page of the Data Factory or Synapse Studio UI.
# [Azure Data Factory](#tab/data-factory) :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the home page - link to Copy Data tool.":::
A one-time copy operation enables data movement from a source to a destination o
## Next steps Try these tutorials that use the Copy Data tool: -- [Quickstart: create a data factory using the Copy Data tool](quickstart-create-data-factory-copy-data-tool.md)-- [Tutorial: copy data in Azure using the Copy Data tool](tutorial-copy-data-tool.md) -- [Tutorial: copy on-premises data to Azure using the Copy Data tool](tutorial-hybrid-copy-data-tool.md)
+- [Quickstart: Create a data factory using the Copy Data tool](quickstart-hello-world-copy-data-tool.md)
+- [Tutorial: Copy data in Azure using the Copy Data tool](tutorial-copy-data-tool.md)
+- [Tutorial: Copy on-premises data to Azure using the Copy Data tool](tutorial-hybrid-copy-data-tool.md)
data-factory Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/introduction.md
Here are important next step documents to explore:
- [Integration runtime](concepts-integration-runtime.md) - [Mapping Data Flows](concepts-data-flow-overview.md) - [Data Factory UI in the Azure portal](quickstart-create-data-factory-portal.md)-- [Copy Data tool in the Azure portal](quickstart-create-data-factory-copy-data-tool.md)
+- [Copy Data tool in the Azure portal](quickstart-hello-world-copy-data-tool.md)
- [PowerShell](quickstart-create-data-factory-powershell.md) - [.NET](quickstart-create-data-factory-dot-net.md) - [Python](quickstart-create-data-factory-python.md)
data-factory Quickstart Create Data Factory Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-copy-data-tool.md
- Title: Copy data by using the Azure Copy Data tool
-description: Create an Azure Data Factory and then use the Copy Data tool to copy data from one location in Azure Blob storage to another location.
----- Previously updated : 07/05/2021---
-# Quickstart: Use the Copy Data tool in the Azure Data Factory Studio to copy data
-
-> [!div class="op_single_selector" title1="Select the version of Data Factory service that you are using:"]
-> * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
-> * [Current version](quickstart-create-data-factory-copy-data-tool.md)
--
-In this quickstart, you use the Azure portal to create a data factory. Then, you use the Copy Data tool to create a pipeline that copies data from a folder in Azure Blob storage to another folder.
-
-> [!NOTE]
-> If you are new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md) before doing this quickstart.
--
-## Create a data factory
-
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. Go to the [Azure portal](https://portal.azure.com).
-1. From the Azure portal menu, select **Create a resource** > **Integration** > **Data Factory**:
-
- :::image type="content" source="./media/doc-common-process/new-azure-data-factory-menu.png" alt-text="New data factory creation":::
-
-1. On the **New data factory** page, enter **ADFTutorialDataFactory** for **Name**.
-
- The name of the Azure Data Factory must be *globally unique*. If you see the following error, change the name of the data factory (for example, **&lt;yourname&gt;ADFTutorialDataFactory**) and try creating again. For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](naming-rules.md) article.
-
- :::image type="content" source="./media/doc-common-process/name-not-available-error.png" alt-text="Error when a name is not available":::
-1. For **Subscription**, select your Azure subscription in which you want to create the data factory.
-1. For **Resource Group**, use one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the list.
- - Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-1. For **Version**, select **V2**.
-1. For **Location**, select the location for the data factory.
-
- The list shows only locations that Data Factory supports, and where your Azure Data Factory meta data will be stored. The associated data stores (like Azure Storage and Azure SQL Database) and computes (like Azure HDInsight) that Data Factory uses can run in other regions.
-
-1. Select **Create**.
-
-1. After the creation is complete, you see the **Data Factory** page. Select **Open** on the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate tab.
-
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-
-## Start the Copy Data tool
-
-1. On the home page of Azure Data Factory, select the **Ingest** tile to start the Copy Data tool.
-
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the Azure Data Factory home page.":::
-
-1. On the **Properties** page of the Copy Data tool, choose **Built-in copy task** under **Task type**, then select **Next**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/copy-data-tool-properties-page.png" alt-text="&quot;Properties&quot; page":::
-
-1. On the **Source data store** page, complete the following steps:
-
- 1. Click **+ Create new connection** to add a connection.
-
- 1. Select the linked service type that you want to create for the source connection. In this tutorial, we use **Azure Blob Storage**. Select it from the gallery, and then select **Continue**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/select-blob-source.png" alt-text="Select Blob":::
-
- 1. On the **New connection (Azure Blob Storage)** page, specify a name for your connection. Select your Azure subscription from the **Azure subscription** list and your storage account from the **Storage account name** list, test connection, and then select **Create**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/configure-blob-storage.png" alt-text="Configure the Azure Blob storage account":::
-
- 1. Select the newly created connection in the **Connection** block.
- 1. In the **File or folder** section, select **Browse** to navigate to the **adftutorial/input** folder, select the **emp.txt** file, and then click **OK**.
- 1. Select the **Binary copy** checkbox to copy file as-is, and then select **Next**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/source-data-store.png" alt-text="Screenshot that shows the Source data store page.":::
-
-1. On the **Destination data store** page, complete the following steps:
- 1. Select the **AzureBlobStorage** connection that you created in the **Connection** block.
-
- 1. In the **Folder path** section, enter **adftutorial/output** for the folder path.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/destination-data-store.png" alt-text="Screenshot that shows the Destination data store page.":::
-
- 1. Leave other settings as default and then select **Next**.
-
-1. On the **Settings** page, specify a name for the pipeline and its description, then select **Next** to use other default configurations.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/settings.png" alt-text="Screenshot that shows the settings page.":::
-
-1. On the **Summary** page, review all settings, and select **Next**.
-
-1. On the **Deployment complete** page, select **Monitor** to monitor the pipeline that you created.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/deployment-page.png" alt-text="&quot;Deployment complete&quot; page":::
-
-1. The application switches to the **Monitor** tab. You see the status of the pipeline on this tab. Select **Refresh** to refresh the list. Click the link under **Pipeline name** to view activity run details or rerun the pipeline.
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/refresh-pipeline.png" alt-text="Refresh pipeline":::
-
-1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **Activity name** column for more details about copy operation. For details about the properties, see [Copy Activity overview](copy-activity-overview.md).
-
-1. To go back to the Pipeline Runs view, select the **All pipeline runs** link in the breadcrumb menu. To refresh the view, select **Refresh**.
-
-1. Verify that the **emp.txt** file is created in the **output** folder of the **adftutorial** container. If the output folder doesn't exist, the Data Factory service automatically creates it.
-
-1. Switch to the **Author** tab above the **Monitor** tab on the left panel so that you can edit linked services, datasets, and pipelines. To learn about editing them in the Data Factory UI, see [Create a data factory by using the Azure portal](quickstart-create-data-factory-portal.md).
-
- :::image type="content" source="./media/quickstart-create-data-factory-copy-data-tool/select-author.png" alt-text="Select Author tab":::
-
-## Next steps
-The pipeline in this sample copies data from one location to another location in Azure Blob storage. To learn about using Data Factory in more scenarios, go through the [tutorials](tutorial-copy-data-portal.md).
data-factory Quickstart Create Data Factory Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-portal.md
- Title: Create an Azure data factory using the Azure Data Factory UI
-description: Create a data factory with a pipeline that copies data from one location in Azure Blob storage to another location.
---- Previously updated : 07/05/2021----
-# Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio
-
-> [!div class="op_single_selector" title1="Select the version of Data Factory service that you are using:"]
-> * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
-> * [Current version](quickstart-create-data-factory-portal.md)
--
-This quickstart describes how to use the Azure Data Factory UI to create and monitor a data factory. The pipeline that you create in this data factory *copies* data from one folder to another folder in Azure Blob storage. To *transform* data by using Azure Data Factory, see [Mapping data flow](concepts-data-flow-overview.md).
-
-> [!NOTE]
-> If you are new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md) before doing this quickstart.
--
-### Video
-Watching this video helps you understand the Data Factory UI:
->[!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Visually-build-pipelines-for-Azure-Data-Factory-v2/Player]
-
-## Create a data factory
-
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. Go to the [Azure portal](https://portal.azure.com).
-1. From the Azure portal menu, select **Create a resource**.
-1. Select **Integration**, and then select **Data Factory**.
-
- :::image type="content" source="./media/doc-common-process/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the New pane.":::
-
-1. On the **Create Data Factory** page, under **Basics** tab, select your Azure **Subscription** in which you want to create the data factory.
-1. For **Resource Group**, take one of the following steps:
-
- a. Select an existing resource group from the drop-down list.
-
- b. Select **Create new**, and enter the name of a new resource group.
-
- To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-
-1. For **Region**, select the location for the data factory.
-
- The list shows only locations that Data Factory supports, and where your Azure Data Factory meta data will be stored. The associated data stores (like Azure Storage and Azure SQL Database) and computes (like Azure HDInsight) that Data Factory uses can run in other regions.
-
-1. For **Name**, enter **ADFTutorialDataFactory**.
- The name of the Azure data factory must be *globally unique*. If you see the following error, change the name of the data factory (for example, **&lt;yourname&gt;ADFTutorialDataFactory**) and try creating again. For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](naming-rules.md) article.
-
- :::image type="content" source="./media/doc-common-process/name-not-available-error.png" alt-text="New data factory error message for duplicate name.":::
-
-1. For **Version**, select **V2**.
-
-1. Select **Next: Git configuration**, and then select the **Configure Git later** check box.
--
-1. Select **Review + create**, and select **Create** after validation passes. After the creation is complete, select **Go to resource** to navigate to the **Data Factory** page.
-
-1. Select **Open** on the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate browser tab.
-
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
-
- > [!NOTE]
- > If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again.
-
-
-## Create a linked service
-In this procedure, you create a linked service to link your Azure Storage account to the data factory. The linked service has the connection information that the Data Factory service uses at runtime to connect to the storage account.
-
-1. On the Azure Data Factory UI page, open the [**Manage**](./author-management-hub.md) tab from the left pane.
-
-1. On the Linked services page, select **+New** to create a new linked service.
-
- :::image type="content" source="./media/doc-common-process/new-linked-service.png" alt-text="New linked service.":::
-
-1. On the **New Linked Service** page, select **Azure Blob Storage**, and then select **Continue**.
-
-1. On the New Linked Service (Azure Blob Storage) page, complete the following steps:
-
- a. For **Name**, enter **AzureStorageLinkedService**.
-
- b. For **Storage account name**, select the name of your Azure Storage account.
-
- c. Select **Test connection** to confirm that the Data Factory service can connect to the storage account.
-
- d. Select **Create** to save the linked service.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/linked-service.png" alt-text="Linked service.":::
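If you prefer to see the equivalent JSON, a linked service like the one created above corresponds roughly to the following sketch. The connection string shown here is a placeholder; the UI collects the actual credentials for you, and the generated definition may include additional properties.

```json
{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<yourStorageAccount>;AccountKey=<yourAccountKey>;EndpointSuffix=core.windows.net"
        }
    }
}
```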
--
-## Create datasets
-In this procedure, you create two datasets: **InputDataset** and **OutputDataset**. These datasets are of type **AzureBlob**. They refer to the Azure Storage linked service that you created in the previous section.
-
-The input dataset represents the source data in the input folder. In the input dataset definition, you specify the blob container (**adftutorial**), the folder (**input**), and the file (**emp.txt**) that contain the source data.
-
-The output dataset represents the data that's copied to the destination. In the output dataset definition, you specify the blob container (**adftutorial**), the folder (**output**), and the file to which the data is copied. Each run of a pipeline has a unique ID associated with it. You can access this ID by using the system variable **RunId**. The name of the output file is dynamically evaluated based on the run ID of the pipeline.
-
-In the linked service settings, you specified the Azure Storage account that contains the source data. In the source dataset settings, you specify where exactly the source data resides (blob container, folder, and file). In the sink dataset settings, you specify where the data is copied to (blob container, folder, and file).
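As a rough JSON illustration of these settings, an input dataset of type **Binary** that points at the container, folder, and file above might look like the following sketch. The exact definition is generated by the UI.

```json
{
    "name": "InputDataset",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "AzureStorageLinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "adftutorial",
                "folderPath": "input",
                "fileName": "emp.txt"
            }
        }
    }
}
```

The output dataset is similar, except that it points to the **output** folder and uses a dynamically evaluated file name instead of a fixed one.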
-
-1. Select the **Author** tab from the left pane.
-
-1. Select the **+** (plus) button, and then select **Dataset**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/new-dataset-menu.png" alt-text="Menu for creating a dataset.":::
-
-1. On the **New Dataset** page, select **Azure Blob Storage**, and then select **Continue**.
-
-1. On the **Select Format** page, choose the format type of your data, and then select **Continue**. In this case, select **Binary** to copy files as-is without parsing the content.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/select-format.png" alt-text="Select format.":::
-
-1. On the **Set Properties** page, complete the following steps:
-
- a. Under **Name**, enter **InputDataset**.
-
- b. For **Linked service**, select **AzureStorageLinkedService**.
-
- c. For **File path**, select the **Browse** button.
-
- d. In the **Choose a file or folder** window, browse to the **input** folder in the **adftutorial** container, select the **emp.txt** file, and then select **OK**.
-
- e. Select **OK**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/set-properties-for-inputdataset.png" alt-text="Set properties for InputDataset.":::
-
-1. Repeat the steps to create the output dataset:
-
- a. Select the **+** (plus) button, and then select **Dataset**.
-
- b. On the **New Dataset** page, select **Azure Blob Storage**, and then select **Continue**.
-
- c. On the **Select Format** page, choose the format type of your data, and then select **Continue**.
-
- d. On the **Set Properties** page, specify **OutputDataset** for the name. Select **AzureStorageLinkedService** as linked service.
-
- e. Under **File path**, enter **adftutorial/output**. If the **output** folder doesn't exist, the copy activity creates it at runtime.
-
- f. Select **OK**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/set-properties-for-outputdataset.png" alt-text="Set properties for OutputDataset.":::
-
-## Create a pipeline
-In this procedure, you create and validate a pipeline with a copy activity that uses the input and output datasets. The copy activity copies data from the file you specified in the input dataset settings to the file you specified in the output dataset settings. If the input dataset specifies only a folder (not the file name), the copy activity copies all the files in the source folder to the destination.
-
-1. Select the **+** (plus) button, and then select **Pipeline**.
-
-1. In the General panel under **Properties**, specify **CopyPipeline** for **Name**. Then collapse the panel by clicking the Properties icon in the top-right corner.
-
-1. In the **Activities** toolbox, expand **Move & Transform**. Drag the **Copy Data** activity from the **Activities** toolbox to the pipeline designer surface. You can also search for activities in the **Activities** toolbox. Specify **CopyFromBlobToBlob** for **Name**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/copy-activity.png" alt-text="Creating a copy data activity.":::
-
-1. Switch to the **Source** tab in the copy activity settings, and select **InputDataset** for **Source Dataset**.
-
-1. Switch to the **Sink** tab in the copy activity settings, and select **OutputDataset** for **Sink Dataset**.
-
-1. Click **Validate** on the pipeline toolbar above the canvas to validate the pipeline settings. Confirm that the pipeline has been successfully validated. To close the validation output, select the Validation button in the top-right corner.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/pipeline-validate.png" alt-text="Validate a pipeline.":::
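Behind the designer, the pipeline and its copy activity are stored as JSON. A simplified sketch, assuming the dataset and activity names used in this quickstart, might look like the following; the actual definition generated by the UI can include additional properties such as store settings and policies.

```json
{
    "name": "CopyPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyFromBlobToBlob",
                "type": "Copy",
                "inputs": [
                    { "referenceName": "InputDataset", "type": "DatasetReference" }
                ],
                "outputs": [
                    { "referenceName": "OutputDataset", "type": "DatasetReference" }
                ],
                "typeProperties": {
                    "source": { "type": "BinarySource" },
                    "sink": { "type": "BinarySink" }
                }
            }
        ]
    }
}
```

You can compare this sketch against the real definition by viewing the pipeline's JSON code in the designer.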
-
-## Debug the pipeline
-In this step, you debug the pipeline before deploying it to Data Factory.
-
-1. On the pipeline toolbar above the canvas, click **Debug** to trigger a test run.
-
-1. Confirm that you see the status of the pipeline run on the **Output** tab of the pipeline settings at the bottom.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/pipeline-output.png" alt-text="Pipeline run output":::
-
-1. Confirm that you see an output file in the **output** folder of the **adftutorial** container. If the output folder doesn't exist, the Data Factory service automatically creates it.
-
-## Trigger the pipeline manually
-In this procedure, you deploy entities (linked services, datasets, pipelines) to Azure Data Factory. Then, you manually trigger a pipeline run.
-
-1. Before you trigger a pipeline, you must publish entities to Data Factory. To publish, select **Publish all** on the top.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/publish-all.png" alt-text="Publish all.":::
-
-1. To trigger the pipeline manually, select **Add Trigger** on the pipeline toolbar, and then select **Trigger Now**. On the **Pipeline run** page, select **OK**.
-
-## Monitor the pipeline
-
-1. Switch to the **Monitor** tab on the left. Use the **Refresh** button to refresh the list.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/monitor-trigger-now-pipeline.png" alt-text="Tab for monitoring pipeline runs":::
-1. Select the **CopyPipeline** link to see the status of the copy activity run on this page.
-
-1. To view details about the copy operation, select the **Details** (eyeglasses image) link. For details about the properties, see [Copy Activity overview](copy-activity-overview.md).
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/copy-operation-details.png" alt-text="Copy operation details.":::
-1. Confirm that you see a new file in the **output** folder.
-1. You can switch back to the **Pipeline runs** view from the **Activity runs** view by selecting the **All pipeline runs** link.
-
-## Trigger the pipeline on a schedule
-This procedure is optional in this tutorial. You can create a *scheduler trigger* to schedule the pipeline to run periodically (hourly, daily, and so on). In this procedure, you create a trigger to run every minute until the end date and time that you specify.
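For reference, a schedule trigger that runs the pipeline every minute until an end time corresponds roughly to the following JSON sketch. The trigger name, start time, and end time shown here are placeholders.

```json
{
    "name": "RunEveryMinute",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Minute",
                "interval": 1,
                "startTime": "2022-10-06T01:00:00Z",
                "endTime": "2022-10-06T01:30:00Z",
                "timeZone": "UTC"
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "CopyPipeline",
                    "type": "PipelineReference"
                }
            }
        ]
    }
}
```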
-
-1. Switch to the **Author** tab.
-
-1. Go to your pipeline, select **Add Trigger** on the pipeline toolbar, and then select **New/Edit**.
-
-1. On the **Add Triggers** page, select **Choose trigger**, and then select **New**.
-
-1. On the **New Trigger** page, under **End**, select **On Date**, specify an end time a few minutes after the current time, and then select **OK**.
-
-   A cost is associated with each pipeline run, so set the end time only a few minutes after the start time, on the same day. However, make sure there's enough time for the pipeline to run between the publish time and the end time. The trigger takes effect only after you publish the solution to Data Factory, not when you save the trigger in the UI.
-
-1. On the **New Trigger** page, select the **Activated** check box, and then select **OK**.
-
- :::image type="content" source="./media/quickstart-create-data-factory-portal/trigger-settings-next.png" alt-text="New Trigger setting.":::
-1. Review the warning message, and select **OK**.
-
-1. Select **Publish all** to publish changes to Data Factory.
-
-1. Switch to the **Monitor** tab on the left. Select **Refresh** to refresh the list. You see that the pipeline runs once every minute from the publish time to the end time.
-
- Notice the values in the **TRIGGERED BY** column. The manual trigger run was from the step (**Trigger Now**) that you did earlier.
-
-1. Switch to the **Trigger runs** view.
-
-1. Confirm that an output file is created for every pipeline run until the specified end date and time in the **output** folder.
-
-## Next steps
-The pipeline in this sample copies data from one location to another location in Azure Blob storage. To learn about using Data Factory in more scenarios, go through the [tutorials](tutorial-copy-data-portal.md).
data-factory Quickstart Create Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory.md
+
+ Title: Create an Azure Data Factory
+description: Learn how to create a data factory using UI from the Azure portal.
++++ Last updated : 07/09/2022+++
+# Quickstart: Create a data factory by using the Azure portal
++
+This quickstart describes how to use either the [Azure Data Factory Studio](https://adf.azure.com) or the [Azure portal UI](https://portal.azure.com) to create a data factory.
+
+> [!NOTE]
+> If you are new to Azure Data Factory, see [Introduction to Azure Data Factory](introduction.md) before trying this quickstart.
+
+## Prerequisites
+
+### Azure subscription
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+### Azure roles
+
+To learn about the Azure role requirements to create a data factory, refer to [Azure Roles requirements](quickstart-create-data-factory-dot-net.md?#azure-roles).
+
+## Create a data factory
+
+A simple creation experience is provided in the Azure Data Factory Studio to enable users to create a data factory within seconds. More advanced creation options are available in the Azure portal.
+
+### Simple creation in the Azure Data Factory Studio
+
+1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
+1. Go to the [Azure Data Factory Studio](https://adf.azure.com) and choose the **Create a new data factory** radio button.
+1. You can use the default values to create the data factory directly, or enter a unique name and choose a preferred location and subscription to use when creating the new data factory.
+
+ :::image type="content" source="media/quickstart-create-data-factory/create-with-azure-data-factory-studio.png" alt-text="Shows a screenshot of the Azure Data Factory Studio page to create a new data factory.":::
+
+1. After creation, you're taken directly to the home page of the Azure Data Factory Studio.
+
+ :::image type="content" source="media/quickstart-create-data-factory/azure-data-factory-studio-home-page.png" alt-text="Shows a screenshot of the Azure Data Factory Studio home page.":::
+
+### Advanced creation in the Azure portal
+
+1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
+1. Go to the [Azure portal data factories page](https://portal.azure.com).
+1. After landing on the data factories page of the Azure portal, click **Create**.
+
+ :::image type="content" source="media/quickstart-create-data-factory/data-factory-create-from-portal.png" alt-text="Shows a screenshot of the Azure portal data factories Create button.":::
+
+1. For **Resource Group**, take one of the following steps:
+ 1. Select an existing resource group from the drop-down list.
+ 1. Select **Create new**, and enter the name of a new resource group.
+
+ To learn about resource groups, see [Use resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
+
+1. For **Region**, select the location for the data factory.
+
+   The list shows only locations that Data Factory supports, and where your Azure Data Factory metadata will be stored. The associated data stores (like Azure Storage and Azure SQL Database) and computes (like Azure HDInsight) that Data Factory uses can run in other regions.
+
+1. For **Name**, enter **ADFTutorialDataFactory**.
+
+ The name of the Azure data factory must be *globally unique*. If you see the following error, change the name of the data factory (for example, **&lt;yourname&gt;ADFTutorialDataFactory**) and try creating again. For naming rules for Data Factory artifacts, see the [Data Factory - naming rules](naming-rules.md) article.
+
+ :::image type="content" source="./media/doc-common-process/name-not-available-error.png" alt-text="New data factory error message for duplicate name.":::
+
+1. For **Version**, select **V2**.
+
+1. Select **Next: Git configuration**, and then select the **Configure Git later** check box.
+
+1. Select **Review + create**, and select **Create** after validation passes. After the creation is complete, select **Go to resource** to navigate to the **Data Factory** page.
+
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate browser tab.
+
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile highlighted.":::
+
+ > [!NOTE]
+ > If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again.
+
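If you prefer to automate this step instead of clicking through the portal, the data factory itself is a single Azure Resource Manager resource. A minimal sketch, using the tutorial's factory name and an example region, might look like the following.

```json
{
    "type": "Microsoft.DataFactory/factories",
    "apiVersion": "2018-06-01",
    "name": "ADFTutorialDataFactory",
    "location": "East US",
    "identity": {
        "type": "SystemAssigned"
    },
    "properties": {}
}
```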
+## Next steps
+Learn how to use Azure Data Factory to copy data from one location to another with the [Hello World - How to copy data](tutorial-copy-data-portal.md) tutorial.
+Learn how to create a data flow with Azure Data Factory in [Create a data flow](data-flow-create.md).
data-factory Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md
+
+ Title: Get started to try out your first data factory pipeline
+description: Get started with your first data factory demo to copy data from one blob storage to another.
+++
+ms.devlang: bicep
+ Last updated : 08/08/2022++++
+# Quickstart: Get started with Azure Data Factory
+
+> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
+> * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
+> * [Current version](quickstart-create-data-factory-rest-api.md)
++
+Welcome to Azure Data Factory! This getting started article will let you create your first data factory and pipeline within 5 minutes. The ARM template below will create and configure everything you need to try it out. Then you only need to navigate to your demo data factory and make one more click to trigger the pipeline, which moves some sample data from one folder to another in Azure Blob storage.
+
+## Prerequisites
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+## Video introduction
+
+Select the button below to try it out!
+
+[![Try your first data factory demo](./media/quickstart-get-started/try-it-now.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.datafactory%2Fdata-factory-get-started%2Fazuredeploy.json)
++
+> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE583aX]
++
+## Try your first demo with one click
+In your first demo scenario you will use the [Copy activity](copy-activity-overview.md) in a data factory to copy an Azure blob named moviesDB2.csv from an input folder in Azure Blob storage to an output folder. In a real-world scenario, this copy operation could be between any of the many supported data sources and sinks available in the service. It could also involve transformations of the data.
+
+Try it now with one click! After clicking the button below, the following objects will be created in Azure:
+- A data factory account
+- A pipeline within the data factory with one copy activity
+- An Azure Blob storage account with [moviesDB2.csv](https://raw.githubusercontent.com/kromerm/adfdataflowdocs/master/sampledata/moviesDB2.csv) uploaded into an input folder as the source
+- A linked service to connect the data factory to the Azure blob storage
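The ARM template is the authoritative definition of these resources, but conceptually the demo pipeline's copy activity is similar to the following sketch. The activity and dataset names used here are hypothetical, not the template's actual names.

```json
{
    "name": "CopyMoviesData",
    "type": "Copy",
    "inputs": [
        { "referenceName": "MoviesSourceDataset", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "MoviesSinkDataset", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": {
            "type": "DelimitedTextSource",
            "storeSettings": { "type": "AzureBlobStorageReadSettings" },
            "formatSettings": { "type": "DelimitedTextReadSettings" }
        },
        "sink": {
            "type": "DelimitedTextSink",
            "storeSettings": { "type": "AzureBlobStorageWriteSettings" },
            "formatSettings": { "type": "DelimitedTextWriteSettings", "fileExtension": ".csv" }
        }
    }
}
```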
+
+## Step 1: Click the button to start
+
+Select the button below to try it out! (If you clicked the one above already, you don't need to do it again.)
+
+[![Try your first data factory demo](./media/quickstart-get-started/try-it-now.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.datafactory%2Fdata-factory-get-started%2Fazuredeploy.json)
+
+You will be redirected to the configuration page shown in the image below to deploy the template. Here, you only need to create a **new resource group**. (You can leave all the other values with their defaults.) Then click **Review + create** and click **Create** to deploy the resources.
+
+All of the resources referenced above will be created in the new resource group, so you can easily clean them up after trying the demo.
++
+## Step 2: Review deployed resources
+
+1. Select **Go to resource group** after your deployment is complete.
+ :::image type="content" source="media/quickstart-get-started/deployment-complete.png" alt-text="A screenshot of the deployment complete page in the Azure portal after successfully deploying the template.":::
+
+1. In the resource group, you will see the new data factory, Azure blob storage account, and managed identity that were created by the deployment.
+ :::image type="content" source="media/quickstart-get-started/resource-group-contents.png" alt-text="A screenshot of the contents of the resource group created for the demo.":::
+
+1. Select the data factory in the resource group to view it. Then select the **Open Azure Data Factory Studio** button to continue.
+ :::image type="content" source="media/quickstart-get-started/open-data-factory-studio.png" alt-text="A screenshot of the Azure portal on the newly created data factory page, highlighting the location of the Open Azure Data Factory Studio button.":::
+
+1. Select the **Author** tab <img src="media/quickstart-get-started/author-button.png" alt="Author tab"/> and then select the **Pipeline** created by the template. Then check the source data by selecting **Open**.
+
+ :::image type="content" source="media/quickstart-get-started/view-pipeline.png" alt-text="Screenshot of the Azure Data Factory Studio showing the pipeline created by the template.":::
+
+1. In the source dataset, select **Browse**, and note that the moviesDB2.csv file has already been uploaded into the input folder.
+
+ :::image type="content" source="media/quickstart-get-started/source-dataset-browse.png" alt-text="Screenshot of the source dataset highlighting the Browse button where the user can see the input file created for the demo.":::
+
+ :::image type="content" source="media/quickstart-get-started/input-contents.png" alt-text="Screenshot of the contents of the input folder showing the moviesDB2.csv file used in the demo.":::
+
+## Step 3: Trigger the demo pipeline to run
+
+1. Select **Add Trigger**, and then **Trigger Now**.
+ :::image type="content" source="media/quickstart-get-started/trigger-now.png" alt-text="Screenshot of the Trigger Now button for the pipeline in the demo.":::
+1. In the right pane under **Pipeline run**, select **OK**.
+
+## Monitor the pipeline
+
+1. Select the **Monitor** tab <img src="media/quickstart-get-started/monitor-button.png" alt="Monitor tab"/>.
+1. You can see an overview of your pipeline runs in the **Monitor** tab, including the run start time and status.
+
+ :::image type="content" source="media/quickstart-get-started/monitor-overview.png" alt-text="Screenshot of the data factory monitoring tab.":::
+
+1. In this quickstart, the pipeline has only one activity type: Copy. Click the pipeline name to see the details of the copy activity's run results.
+
+   :::image type="content" source="media/quickstart-get-started/copy-activity-run-results.png" alt-text="Screenshot of the run results of a copy activity in the data factory monitoring tab.":::
+
+1. Click **Details** to display the detailed copy process. In the results, the data read and written sizes are the same, and one file was read and written, which confirms that all the data has been successfully copied to the destination.
+
+ :::image type="content" source="media/quickstart-get-started/copy-activity-detailed-run-results.png" alt-text="Screenshot of the detailed copy activity run results.":::
+
+## Clean up resources
+
+You can clean up all the resources you created in this quickstart in either of two ways. You can [delete the entire Azure resource group](../azure-resource-manager/management/delete-resource-group.md), which includes all the resources created in it. Or if you want to keep some resources intact, browse to the resource group and delete only the specific resources you want, keeping the others. For example, if you are using this template to create a data factory for use in another tutorial, you can delete the other resources but keep only the data factory.
+
+## Next steps
+
+In this quickstart, you created an Azure Data Factory containing a pipeline with a copy activity. To learn more about Azure Data Factory, continue on to the article and Learn module below.
+
+- [Hello World - How to copy data](quickstart-hello-world-copy-data-tool.md)
+- [Learn module: Introduction to Azure Data Factory](/learn/modules/intro-to-azure-data-factory/)
data-factory Quickstart Hello World Copy Data Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-hello-world-copy-data-tool.md
+
+ Title: Copy data by using the copy data tool
+description: Create an Azure Data Factory and then use the copy data tool to copy data from one location in Azure Blob storage to another location.
+++++ Last updated : 07/05/2021+++
+# Quickstart: Use the copy data tool in the Azure Data Factory Studio to copy data
++
+In this quickstart, you will use the Copy Data tool to create a pipeline that copies data from a source folder in Azure Blob storage to a target folder.
+
+## Prerequisites
+
+### Azure subscription
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+
+### Prepare source data in Azure Blob Storage
+Select the button below to try it out!
+
+[![Try your first data factory demo](./media/quickstart-get-started/try-it-now.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.datafactory%2Fdata-factory-copy-data-tool%2Fazuredeploy.json)
+
+You will be redirected to the configuration page shown in the image below to deploy the template. Here, you only need to create a **new resource group**. (You can leave all the other values with their defaults.) Then click **Review + create** and click **Create** to deploy the resources.
+
+> [!NOTE]
+> The user deploying the template needs to assign a role to a managed identity. This requires permissions that can be granted through the Owner, User Access Administrator or Managed Identity Operator roles.
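The roles are required because the template includes a role assignment for the managed identity it creates. As an illustration only (the role and scope in the actual template may differ), a role assignment resource in an ARM template generally looks like the following sketch, shown here with the built-in Storage Blob Data Contributor role.

```json
{
    "type": "Microsoft.Authorization/roleAssignments",
    "apiVersion": "2022-04-01",
    "name": "[guid(resourceGroup().id, 'adf-demo-role')]",
    "properties": {
        "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]",
        "principalId": "<managed identity principal ID>",
        "principalType": "ServicePrincipal"
    }
}
```

If the deploying user doesn't hold one of the roles listed in the note above, the role assignment step of the deployment fails.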
+
+A new blob storage account will be created in the new resource group, and the moviesDB2.csv file will be stored in a folder called **input** in the blob storage.
++
+### Create a data factory
+
+You can use your existing data factory or create a new one as described in [Quickstart: Create a data factory by using the Azure portal](quickstart-create-data-factory.md).
+
+## Use the copy data tool to copy data
+
+The steps below will walk you through how to easily copy data with the copy data tool in Azure Data Factory.
+
+### Step 1: Start the Copy Data tool
+
+1. On the home page of Azure Data Factory, select the **Ingest** tile to start the Copy Data tool.
+
+ :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the Azure Data Factory home page.":::
+
+1. On the **Properties** page of the Copy Data tool, choose **Built-in copy task** under **Task type**, then select **Next**.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/copy-data-tool-properties-page.png" alt-text="Screenshot that shows the Properties page.":::
+
+### Step 2: Complete source configuration
+
+1. Click **+ Create new connection** to add a connection.
+
+1. Select the linked service type that you want to create for the source connection. In this tutorial, we use **Azure Blob Storage**. Select it from the gallery, and then select **Continue**.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/select-blob-source.png" alt-text="Screenshot that shows the Select Blob dialog.":::
+
+1. On the **New connection (Azure Blob Storage)** page, specify a name for your connection. Select your Azure subscription from the **Azure subscription** list and your storage account from the **Storage account name** list, select **Test connection**, and then select **Create**.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/configure-blob-storage.png" alt-text="Screenshot that shows where to configure the Azure Blob storage account.":::
+
+1. Select the newly created connection in the **Connection** block.
+1. In the **File or folder** section, select **Browse** to navigate to the **adftutorial/input** folder, select the **emp.txt** file, and then click **OK**.
+1. Select the **Binary copy** checkbox to copy the file as-is, and then select **Next**.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/source-data-store.png" alt-text="Screenshot that shows the Source data store page.":::
+
+### Step 3: Complete destination configuration
+1. Select the **AzureBlobStorage** connection that you created in the **Connection** block.
+
+1. In the **Folder path** section, enter **adftutorial/output** for the folder path.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/destination-data-store.png" alt-text="Screenshot that shows the Destination data store page.":::
+
+1. Leave other settings as default and then select **Next**.
+
+### Step 4: Review all settings and deployment
+
+1. On the **Settings** page, specify a name and description for the pipeline, then select **Next** to use the other default configurations.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/settings.png" alt-text="Screenshot that shows the settings page.":::
+
+1. On the **Summary** page, review all settings, and select **Next**.
+
+1. On the **Deployment complete** page, select **Monitor** to monitor the pipeline that you created.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/deployment-page.png" alt-text="Screenshot that shows the Deployment complete page.":::
+
+### Step 5: Monitor the running results
+1. The application switches to the **Monitor** tab. You see the status of the pipeline on this tab. Select **Refresh** to refresh the list. Click the link under **Pipeline name** to view activity run details or rerun the pipeline.
+
+ :::image type="content" source="./media/quickstart-hello-world-copy-data-tool/refresh-pipeline.png" alt-text="Screenshot that shows the refresh pipeline button.":::
+
+1. On the Activity runs page, select the **Details** link (eyeglasses icon) under the **Activity name** column for more details about the copy operation. For details about the properties, see [Copy Activity overview](copy-activity-overview.md).
+
+## Next steps
+The pipeline in this sample copies data from one location to another location in Azure Blob storage. To learn about using Data Factory in more scenarios, go through the [tutorials](tutorial-copy-data-portal.md).
data-factory Quickstart Learn Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-learn-modules.md
+
+ Title: A summary of introductory training modules
+description: Get started learning about Azure Data Factory fast with our introductory training modules.
++++ Last updated : 09/14/2022+++
+# Introductory training modules for Azure Data Factory
+
+Azure Data Factory provides a number of training modules to help you master the basics, as well as more in-depth modules that cover the deeper capabilities of the service. Below are links to and descriptions of our introductory modules to get you started fast! You can also [view the entire catalog](/training/browse/?filter-products=factory&products=azure-data-factory) of available data factory training, including more advanced modules.
+
+## Introduction to Azure Data Factory
+
+This is the best place to start if you are completely new to the product.
+[Visit this module now.](/training/modules/intro-to-azure-data-factory)
++
+## Integrate data with Azure Data Factory or Azure Synapse Pipeline
+
+This module helps you master the basic functionality of the service and learn how to use the integration runtime to access on-premises resources within your own network.
+[Visit this module now.](/training/modules/data-integration-azure-data-factory)
++
+## Petabyte scale ingestion with Azure Data Factory or Azure Synapse Pipeline
+
+This module introduces you to the details of Azure Data Factory's ingestion methods and demonstrates how you can ingest large volumes of data even at petabyte scale using the service.
+[Visit this module now.](/training/modules/petabyte-scale-ingestion-azure-data-factory)
++
+## Next steps
+
+- [Quickstart: Get started with Azure Data Factory](quickstart-get-started.md)
+- [Quickstart: Create data factory using UI](quickstart-create-data-factory-portal.md)
+- [Quickstart: Copy data tool](quickstart-hello-world-copy-data-tool.md)
+- [Quickstart: Create data factory - ARM template](quickstart-create-data-factory-resource-manager-template.md)
+- [Quickstart: Create data flow](data-flow-create.md)
+- [All Learn modules for Azure Data Factory](/training/browse/?filter-products=fact&products=azure-data-factory)
data-factory Tutorial Control Flow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-control-flow-portal.md
Previously updated : 06/07/2021 Last updated : 10/04/2022 # Branching and chaining activities in an Azure Data Factory pipeline using the Azure portal
To trigger sending an email from the pipeline, you use [Logic Apps](../logic-app
### Success email workflow Create a Logic App workflow named `CopySuccessEmail`. Define the workflow trigger as `When an HTTP request is received`, and add an action of `Office 365 Outlook – Send an email`. For your request trigger, fill in the `Request Body JSON Schema` with the following JSON:
For your request trigger, fill in the `Request Body JSON Schema` with the follow
The Request in the Logic App Designer should look like the following image: For the **Send Email** action, customize how you wish to format the email, utilizing the properties passed in the request Body JSON schema. Here is an example: Save the workflow. Make a note of your HTTP Post request URL for your success email workflow:
https://prodxxx.eastus.logic.azure.com:443/workflows/000000/triggers/manual/path
### Fail email workflow Follow the same steps to create another Logic Apps workflow of **CopyFailEmail**. In the request trigger, the `Request Body JSON schema` is the same. Change the format of your email like the `Subject` to tailor toward a failure email. Here is an example: Save the workflow. Make a note of your HTTP Post request URL for your failure email workflow:
https://prodxxx.eastus.logic.azure.com:443/workflows/000000/triggers/manual/path
## Create a data factory 1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**:
+1. Expand the menu at the top left and select **Create a resource**. Then select **Integration** > **Data Factory**:
- :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
+ :::image type="content" source="./media/tutorial-control-flow-portal/create-resource.png" alt-text="Shows a screenshot of the &quot;Create a resource&quot; button in the Azure portal.":::
-2. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
+ :::image type="content" source="./media/tutorial-control-flow-portal/new-azure-data-factory-menu.png" alt-text="Shows a screenshot of the Data Factory selection in the &quot;New&quot; pane.":::
+
+1. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
:::image type="content" source="./media/tutorial-control-flow-portal/new-azure-data-factory.png" alt-text="New data factory page":::
https://prodxxx.eastus.logic.azure.com:443/workflows/000000/triggers/manual/path
*Data factory name "ADFTutorialDataFactory" is not available.*
-3. Select your Azure **subscription** in which you want to create the data factory.
-4. For the **Resource Group**, do one of the following steps:
+1. Select your Azure **subscription** in which you want to create the data factory.
+1. For the **Resource Group**, do one of the following steps:
- Select **Use existing**, and select an existing resource group from the drop-down list. - Select **Create new**, and enter the name of a resource group. To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-4. Select **V2** for the **version**.
-5. Select the **location** for the data factory. Only locations that are supported are displayed in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
-6. Select **Pin to dashboard**.
-7. Click **Create**.
-8. On the dashboard, you see the following tile with status: **Deploying data factory**.
-
- :::image type="content" source="media/tutorial-control-flow-portal/deploying-data-factory.png" alt-text="deploying data factory tile":::
-9. After the creation is complete, you see the **Data Factory** page as shown in the image.
+1. Select **V2** for the **version**.
+1. Select the **location** for the data factory. Only locations that are supported are displayed in the drop-down list. The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
+1. Select **Pin to dashboard**.
+1. Click **Create**.
+1. After the creation is complete, you see the **Data Factory** page as shown in the image.
- :::image type="content" source="./media/tutorial-control-flow-portal/data-factory-home-page.png" alt-text="Data factory home page":::
-10. Click **Author & Monitor** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
+ :::image type="content" source="./media/tutorial-control-flow-portal/data-factory-home-page.png" alt-text="Shows a screenshot of the data factory home page.":::
+1. Click **Open Azure Data Factory Studio** tile to launch the Azure Data Factory user interface (UI) in a separate tab.
## Create a pipeline
In this step, you create a pipeline with one Copy activity and two Web activitie
1. In the home page of Data Factory UI, click the **Orchestrate** tile.
- :::image type="content" source="./media/doc-common-process/get-started-page.png" alt-text="Screenshot that shows the ADF home page.":::
-3. In the properties window for the pipeline, switch to the **Parameters** tab, and use the **New** button to add the following three parameters of type String: sourceBlobContainer, sinkBlobContainer, and receiver.
+ :::image type="content" source="media/tutorial-data-flow/orchestrate.png" alt-text="Shows a screenshot of the data factory home page with the Orchestrate tile highlighted.":::
+
+1. In the properties window for the pipeline, switch to the **Parameters** tab, and use the **New** button to add the following three parameters of type String: sourceBlobContainer, sinkBlobContainer, and receiver.
- **sourceBlobContainer** - parameter in the pipeline consumed by the source blob dataset.
- **sinkBlobContainer** - parameter in the pipeline consumed by the sink blob dataset.
- **receiver** - this parameter is used by the two Web activities in the pipeline that send success or failure emails to the receiver whose email address is specified by this parameter.
- :::image type="content" source="./media/tutorial-control-flow-portal/pipeline-parameters.png" alt-text="New pipeline menu":::
-4. In the **Activities** toolbox, expand **Data Flow**, and drag-drop **Copy** activity to the pipeline designer surface.
+ :::image type="content" source="./media/tutorial-control-flow-portal/pipeline-parameters.png" alt-text="Shows a screenshot of the New pipeline menu.":::
+1. In the **Activities** toolbox, search for **Copy** and drag-drop the **Copy** activity to the pipeline designer surface.
+
+ :::image type="content" source="./media/tutorial-control-flow-portal/drag-drop-copy-activity.png" alt-text="Shows a screenshot demonstrating how to drag and drop the copy activity onto the pipeline designer.":::
+1. Select the **Copy** activity you dragged onto the pipeline designer surface. In the **Properties** window for the **Copy** activity at the bottom, switch to the **Source** tab, and click **+ New**. You create a source dataset for the copy activity in this step.
+
+ :::image type="content" source="./media/tutorial-control-flow-portal/new-source-dataset-button.png" alt-text="Screenshot that shows how to create a source dataset for the copy activity.":::
+1. In the **New Dataset** window, select the **Azure** tab at the top, and then choose **Azure Blob Storage**, and select **Continue**.
+
+ :::image type="content" source="./media/tutorial-control-flow-portal/select-azure-blob-storage.png" alt-text="Shows a screenshot of the select Azure Blob Storage button.":::
+
+1. In the **Select format** window, choose **DelimitedText** and select **Continue**.
- :::image type="content" source="./media/tutorial-control-flow-portal/drag-drop-copy-activity.png" alt-text="Drag-drop copy activity":::
-5. In the **Properties** window for the **Copy** activity at the bottom, switch to the **Source** tab, and click **+ New**. You create a source dataset for the copy activity in this step.
+ :::image type="content" source="media/tutorial-control-flow-portal/select-format.png" alt-text="Shows a screenshot of the &quot;Select Format&quot; window with the DelimitedText format highlighted.":::
- :::image type="content" source="./media/tutorial-control-flow-portal/new-source-dataset-button.png" alt-text="Screenshot that shows how to create a source dataset for teh copy activity.":::
-6. In the **New Dataset** window, select **Azure Blob Storage**, and click **Finish**.
+1. You see a new **tab** titled **Set properties**. Change the name of the dataset to **SourceBlobDataset**. Select the **Linked Service** dropdown, and choose **+New** to create a new linked service to your source dataset.
- :::image type="content" source="./media/tutorial-control-flow-portal/select-azure-blob-storage.png" alt-text="Select Azure Blob Storage":::
-7. You see a new **tab** titled **AzureBlob1**. Change the name of the dataset to **SourceBlobDataset**.
+   :::image type="content" source="./media/tutorial-control-flow-portal/create-new-linked-service.png" alt-text="Shows a screenshot of the &quot;Set properties&quot; window for the dataset, with the &quot;+New&quot; button highlighted under the &quot;Linked service&quot; dropdown.":::
- :::image type="content" source="./media/tutorial-control-flow-portal/dataset-general-page.png" alt-text="Dataset general settings":::
-8. Switch to the **Connection** tab in the **Properties** window, and click New for the **Linked service**. You create a linked service to link your Azure Storage account to the data factory in this step.
+1. You will see the **New linked service** window where you can fill out the required properties for the linked service.
- :::image type="content" source="./media/tutorial-control-flow-portal/dataset-connection-new-button.png" alt-text="Dataset connection - new linked service":::
-9. In the **New Linked Service** window, do the following steps:
+   :::image type="content" source="./media/tutorial-control-flow-portal/new-linked-service-window.png" alt-text="Shows a screenshot of the dataset connection window with the new linked service button highlighted.":::
+
+1. In the **New Linked Service** window, complete the following steps:
1. Enter **AzureStorageLinkedService** for **Name**.
- 2. Select your Azure storage account for the **Storage account name**.
- 3. Click **Save**.
+ 1. Select your Azure storage account for the **Storage account name**.
+ 1. Click **Create**.
+
+1. On the **Set properties** window that appears next, select **Open this dataset** to enter a parameterized value for the file name.
+
+ :::image type="content" source="media/tutorial-control-flow-portal/dataset-set-properties.png" alt-text="Shows a screenshot of the dataset &quot;Set properties&quot; window with the &quot;Open this dataset&quot; link highlighted.":::
- :::image type="content" source="./media/tutorial-control-flow-portal/new-azure-storage-linked-service.png" alt-text="New Azure Storage linked service":::
-12. Enter `@pipeline().parameters.sourceBlobContainer` for the folder and `emp.txt` for the file name. You use the sourceBlobContainer pipeline parameter to set the folder path for the dataset.
+1. Enter `@pipeline().parameters.sourceBlobContainer` for the folder and `emp.txt` for the file name.
- :::image type="content" source="./media/tutorial-control-flow-portal/source-dataset-settings.png" alt-text="Source dataset settings":::
+ :::image type="content" source="./media/tutorial-control-flow-portal/source-dataset-settings.png" alt-text="Shows a screenshot of the source dataset settings.":::
-13. Switch to the **pipeline** tab (or) click the pipeline in the treeview. Confirm that **SourceBlobDataset** is selected for **Source Dataset**.
+1. Switch back to the **pipeline** tab (or click the pipeline in the treeview on the left), and select the **Copy** activity on the designer. Confirm that your new dataset is selected for **Source Dataset**.
- :::image type="content" source="./media/tutorial-control-flow-portal/pipeline-source-dataset-selected.png" alt-text="Source dataset":::
+ :::image type="content" source="./media/tutorial-control-flow-portal/pipeline-source-dataset-selected.png" alt-text="Shows a screenshot of the source dataset.":::
-13. In the properties window, switch to the **Sink** tab, and click **+ New** for **Sink Dataset**. You create a sink dataset for the copy activity in this step similar to the way you created the source dataset.
+1. In the properties window, switch to the **Sink** tab, and click **+ New** for **Sink Dataset**. You create a sink dataset for the copy activity in this step similar to the way you created the source dataset.
- :::image type="content" source="./media/tutorial-control-flow-portal/new-sink-dataset-button.png" alt-text="New sink dataset button":::
-14. In the **New Dataset** window, select **Azure Blob Storage**, and click **Finish**.
-15. In the **General** settings page for the dataset, enter **SinkBlobDataset** for **Name**.
-16. Switch to the **Connection** tab, and do the following steps:
+ :::image type="content" source="./media/tutorial-control-flow-portal/new-sink-dataset-button.png" alt-text="Shows a screenshot of the new sink dataset button":::
+1. In the **New Dataset** window, select **Azure Blob Storage** and click **Continue**. Then select **DelimitedText** again in the **Select format** window and click **Continue** again.
- 1. Select **AzureStorageLinkedService** for **LinkedService**.
- 2. Enter `@pipeline().parameters.sinkBlobContainer` for the folder.
- 1. Enter `@CONCAT(pipeline().RunId, '.txt')` for the file name. The expression uses the ID of the current pipeline run for the file name. For the supported list of system variables and expressions, see [System variables](control-flow-system-variables.md) and [Expression language](control-flow-expression-language-functions.md).
+1. In the **Set properties** page for the dataset, enter **SinkBlobDataset** for **Name**, and select **AzureStorageLinkedService** for **LinkedService**.
+1. Expand the Advanced section of the properties page and select **Open this dataset**.
+
+1. On the dataset **Connection** tab, edit the **File path**. Enter `@pipeline().parameters.sinkBlobContainer` for the folder, and `@concat(pipeline().RunId, '.txt')` for the file name. The expression uses the ID of the current pipeline run for the file name. For the supported list of system variables and expressions, see [System variables](control-flow-system-variables.md) and [Expression language](control-flow-expression-language-functions.md).
- :::image type="content" source="./media/tutorial-control-flow-portal/sink-dataset-settings.png" alt-text="Sink dataset settings":::
-17. Switch to the **pipeline** tab at the top. Expand **General** in the **Activities** toolbox, and drag-drop a **Web** activity to the pipeline designer surface. Set the name of the activity to **SendSuccessEmailActivity**. The Web Activity allows a call to any REST endpoint. For more information about the activity, see [Web Activity](control-flow-web-activity.md). This pipeline uses a Web Activity to call the Logic Apps email workflow.
+ :::image type="content" source="./media/tutorial-control-flow-portal/sink-dataset-settings.png" alt-text="Shows a screenshot of the Sink dataset settings.":::
- :::image type="content" source="./media/tutorial-control-flow-portal/success-web-activity-general.png" alt-text="Drag-drop first Web activity":::
-18. Switch to the **Settings** tab from the **General** tab, and do the following steps:
+1. Switch back to the **pipeline** tab at the top. Search for **Web** in the search box, and drag-drop a **Web** activity to the pipeline designer surface. Set the name of the activity to **SendSuccessEmailActivity**. The Web Activity allows a call to any REST endpoint. For more information about the activity, see [Web Activity](control-flow-web-activity.md). This pipeline uses a Web Activity to call the Logic Apps email workflow.
+
+ :::image type="content" source="./media/tutorial-control-flow-portal/success-web-activity-general.png" alt-text="Shows a screenshot demonstrating how to drag and drop the first Web activity.":::
+1. Switch to the **Settings** tab from the **General** tab, and do the following steps:
1. For **URL**, specify URL for the logic apps workflow that sends the success email.
- 2. Select **POST** for **Method**.
- 3. Click **+ Add header** link in the **Headers** section.
- 4. Add a header **Content-Type** and set it to **application/json**.
- 5. Specify the following JSON for **Body**.
+ 1. Select **POST** for **Method**.
+ 1. Click **+ Add header** link in the **Headers** section.
+ 1. Add a header **Content-Type** and set it to **application/json**.
+ 1. Specify the following JSON for **Body**.
```json {
In this step, you create a pipeline with one Copy activity and two Web activitie
- Pipeline Name - Passing value of `@{pipeline().Pipeline}`. This is also a system variable, allowing you to access the corresponding pipeline name.
- Receiver - Passing value of `@pipeline().parameters.receiver`. Accessing the pipeline parameters.
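Assembled into a request body, the pipeline name and receiver values described above look like the following minimal sketch. The JSON property names shown here are assumptions; they must match the Request Body JSON Schema you defined in the Logic App workflow, and the actual body can carry additional properties such as a message or the data factory name.

```json
{
    "pipelineName": "@{pipeline().Pipeline}",
    "receiver": "@pipeline().parameters.receiver"
}
```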
- :::image type="content" source="./media/tutorial-control-flow-portal/web-activity1-settings.png" alt-text="Settings for the first Web activity":::
-19. Connect the **Copy** activity to the **Web** activity by dragging the green button next to the Copy activity and dropping on the Web activity.
+ :::image type="content" source="./media/tutorial-control-flow-portal/web-activity1-settings.png" alt-text="Shows a screenshot of the settings for the first Web activity.":::
+1. Connect the **Copy** activity to the **Web** activity by dragging the green checkbox button next to the Copy activity and dropping on the Web activity.
- :::image type="content" source="./media/tutorial-control-flow-portal/connect-copy-web-activity1.png" alt-text="Connect Copy activity with the first Web activity":::
-20. Drag-drop another **Web** activity from the Activities toolbox to the pipeline designer surface, and set the **name** to **SendFailureEmailActivity**.
+ :::image type="content" source="./media/tutorial-control-flow-portal/connect-copy-web-activity1.png" alt-text="Shows a screenshot demonstrating how to connect the Copy activity with the first Web activity.":::
+1. Drag-drop another **Web** activity from the Activities toolbox to the pipeline designer surface, and set the **name** to **SendFailureEmailActivity**.
- :::image type="content" source="./media/tutorial-control-flow-portal/web-activity2-name.png" alt-text="Name of the second Web activity":::
-21. Switch to the **Settings** tab, and do the following steps:
+ :::image type="content" source="./media/tutorial-control-flow-portal/web-activity2-name.png" alt-text="Shows a screenshot of the name of the second Web activity.":::
+1. Switch to the **Settings** tab, and do the following steps:
1. For **URL**, specify URL for the logic apps workflow that sends the failure email.
- 2. Select **POST** for **Method**.
- 3. Click **+ Add header** link in the **Headers** section.
- 4. Add a header **Content-Type** and set it to **application/json**.
- 5. Specify the following JSON for **Body**.
+ 1. Select **POST** for **Method**.
+ 1. Click **+ Add header** link in the **Headers** section.
+ 1. Add a header **Content-Type** and set it to **application/json**.
+ 1. Specify the following JSON for **Body**.
```json {
In this step, you create a pipeline with one Copy activity and two Web activitie
} ```
- :::image type="content" source="./media/tutorial-control-flow-portal/web-activity2-settings.png" alt-text="Settings for the second Web activity":::
-22. Select **Copy** activity in the pipeline designer, and click **+->** button, and select **Error**.
+ :::image type="content" source="./media/tutorial-control-flow-portal/web-activity2-settings.png" alt-text="Shows a screenshot of the settings for the second Web activity.":::
+1. Select the red **X** button on the right side of the **Copy** activity in the pipeline designer and drag and drop it onto the **SendFailureEmailActivity** you just created.
:::image type="content" source="./media/tutorial-control-flow-portal/select-copy-failure-link.png" alt-text="Screenshot that shows how to select Error on the Copy activity in the pipeline designer.":::
-23. Drag the **red** button next to the Copy activity to the second Web activity **SendFailureEmailActivity**. You can move the activities around so that the pipeline looks like in the following image:
- :::image type="content" source="./media/tutorial-control-flow-portal/full-pipeline.png" alt-text="Full pipeline with all activities":::
-24. To validate the pipeline, click **Validate** button on the toolbar. Close the **Pipeline Validation Output** window by clicking the **>>** button.
+1. To validate the pipeline, click the **Validate** button on the toolbar. Close the **Pipeline Validation Output** window by clicking the **>>** button.
- :::image type="content" source="./media/tutorial-control-flow-portal/validate-pipeline.png" alt-text="Validate pipeline":::
-24. To publish the entities (datasets, pipelines, etc.) to Data Factory service, select **Publish All**. Wait until you see the **Successfully published** message.
+ :::image type="content" source="./media/tutorial-control-flow-portal/validate-pipeline.png" alt-text="Shows a screenshot of the Validate pipeline button.":::
+1. To publish the entities (datasets, pipelines, etc.) to Data Factory service, select **Publish All**. Wait until you see the **Successfully published** message.
- :::image type="content" source="./media/tutorial-control-flow-portal/publish-button.png" alt-text="Publish":::
+ :::image type="content" source="./media/tutorial-control-flow-portal/publish-button.png" alt-text="Shows a screenshot of the Publish button in the data factory portal.":::
## Trigger a pipeline run that succeeds 1. To **trigger** a pipeline run, click **Trigger** on the toolbar, and click **Trigger Now**.
- :::image type="content" source="./media/tutorial-control-flow-portal/trigger-now-menu.png" alt-text="Trigger a pipeline run":::
-2. In the **Pipeline Run** window, do the following steps:
+ :::image type="content" source="./media/tutorial-control-flow-portal/trigger-now-menu.png" alt-text="Shows a screenshot of the Trigger Now button.":::
+1. In the **Pipeline Run** window, do the following steps:
1. Enter **adftutorial/adfv2branch/input** for the **sourceBlobContainer** parameter.
- 2. Enter **adftutorial/adfv2branch/output** for the **sinkBlobContainer** parameter.
- 3. Enter an **email address** of the **receiver**.
- 4. Click **Finish**
+ 1. Enter **adftutorial/adfv2branch/output** for the **sinkBlobContainer** parameter.
+ 1. Enter an **email address** of the **receiver**.
+ 1. Click **Finish**
:::image type="content" source="./media/tutorial-control-flow-portal/pipeline-run-parameters.png" alt-text="Pipeline run parameters":::
In this step, you create a pipeline with one Copy activity and two Web activitie
1. To monitor the pipeline run, switch to the **Monitor** tab on the left. You see the pipeline run that was triggered manually by you. Use the **Refresh** button to refresh the list. :::image type="content" source="./media/tutorial-control-flow-portal/monitor-success-pipeline-run.png" alt-text="Successful pipeline run":::
-2. To **view activity runs** associated with this pipeline run, click the first link in the **Actions** column. You can switch back to the previous view by clicking **Pipelines** at the top. Use the **Refresh** button to refresh the list.
+1. To **view activity runs** associated with this pipeline run, click the first link in the **Actions** column. You can switch back to the previous view by clicking **Pipelines** at the top. Use the **Refresh** button to refresh the list.
:::image type="content" source="./media/tutorial-control-flow-portal/activity-runs-success.png" alt-text="Screenshot that shows how to view the list of activity runs."::: ## Trigger a pipeline run that fails 1. Switch to the **Edit** tab on the left.
-2. To **trigger** a pipeline run, click **Trigger** on the toolbar, and click **Trigger Now**.
-3. In the **Pipeline Run** window, do the following steps:
+1. To **trigger** a pipeline run, click **Trigger** on the toolbar, and click **Trigger Now**.
+1. In the **Pipeline Run** window, do the following steps:
1. Enter **adftutorial/dummy/input** for the **sourceBlobContainer** parameter. Ensure that the dummy folder does not exist in the adftutorial container.
- 2. Enter **adftutorial/dummy/output** for the **sinkBlobContainer** parameter.
+ 1. Enter **adftutorial/dummy/output** for the **sinkBlobContainer** parameter.
3. Enter an **email address** of the **receiver**. 4. Click **Finish**.
In this step, you create a pipeline with one Copy activity and two Web activitie
1. To monitor the pipeline run, switch to the **Monitor** tab on the left. You see the pipeline run that was triggered manually by you. Use the **Refresh** button to refresh the list. :::image type="content" source="./media/tutorial-control-flow-portal/monitor-failure-pipeline-run.png" alt-text="Failure pipeline run":::
-2. Click **Error** link for the pipeline run to see details about the error.
+1. Click the **Error** link for the pipeline run to see details about the error.
:::image type="content" source="./media/tutorial-control-flow-portal/pipeline-error-message.png" alt-text="Pipeline error":::
-2. To **view activity runs** associated with this pipeline run, click the first link in the **Actions** column. Use the **Refresh** button to refresh the list. Notice that the Copy activity in the pipeline failed. The Web activity succeeded to send the failure email to the specified receiver.
+1. To **view activity runs** associated with this pipeline run, click the first link in the **Actions** column. Use the **Refresh** button to refresh the list. Notice that the Copy activity in the pipeline failed. The Web activity succeeded in sending the failure email to the specified receiver.
:::image type="content" source="./media/tutorial-control-flow-portal/activity-runs-failure.png" alt-text="Activity runs":::
-4. Click **Error** link in the **Actions** column to see details about the error.
+1. Click the **Error** link in the **Actions** column to see details about the error.
:::image type="content" source="./media/tutorial-control-flow-portal/activity-run-error.png" alt-text="Activity run error":::
data-factory Tutorial Incremental Copy Change Tracking Feature Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-incremental-copy-change-tracking-feature-portal.md
Install the latest Azure PowerShell modules by following instructions in [How t
1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers. 1. On the left menu, select **Create a resource** > **Data + Analytics** > **Data Factory**:
- :::image type="content" source="./media/quickstart-create-data-factory-portal/new-azure-data-factory-menu.png" alt-text="Data Factory selection in the &quot;New&quot; pane":::
+ :::image type="content" source="./media/doc-common-process/new-azure-data-factory-menu.png" alt-text="Screenshot that shows the data factory selection in the New pane.":::
2. In the **New data factory** page, enter **ADFTutorialDataFactory** for the **name**.
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics
description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 10/03/2022 --
digital-twins How To Ingest Opcua Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-ingest-opcua-data.md
Title: Ingest OPC UA data with Azure Digital Twins description: Steps to get your Azure OPC UA data into Azure Digital Twins-- Previously updated : 06/21/2022++ Last updated : 10/4/2022 # Optional fields. Don't forget to remove # if you need a field.
If you already have a physical OPC UA device or another OPC UA simulation server
The Prosys Software requires a simple virtual resource. Using the [Azure portal](https://portal.azure.com), [create a Windows 10 virtual machine (VM)](../virtual-machines/windows/quick-create-portal.md) with the following specifications: * **Availability options**: No infrastructure redundancy required
-* **Image**: Windows 10 Pro, Version 2004 - Gen2
+* **Image**: Windows 10 Pro, Version 21H2 - Gen2
* **Size**: Standard_B1s - 1 vcpu, 1 GiB memory :::image type="content" source="media/how-to-ingest-opcua-data/create-windows-virtual-machine-1.png" alt-text="Screenshot of the Azure portal, showing the Basics tab of Windows virtual machine setup." lightbox="media/how-to-ingest-opcua-data/create-windows-virtual-machine-1.png":::
In this section, you set up the OPC UA Server for simulating data. Verify that y
In this section, you'll set up an IoT Hub instance and an IoT Edge device.
-First, [create an Azure IoT Hub instance](../iot-hub/iot-hub-create-through-portal.md). For this article, you can create an instance in the **F1 - Free** tier.
+First, [create an Azure IoT Hub instance](../iot-hub/iot-hub-create-through-portal.md). For this article, you can create an instance in the **F1: Free** tier (if you're using the Azure portal to create the hub, that option is on the **Management** tab of setup).
:::image type="content" source="media/how-to-ingest-opcua-data/iot-hub.png" alt-text="Screenshot of the Azure portal showing properties of an IoT Hub.":::
-After you've created the Azure IoT Hub instance, select **IoT Edge** from the instance's left navigation menu, and select **Add an IoT Edge device**.
+After you've created the Azure IoT Hub instance, select **IoT Edge** from the instance's left navigation menu, and select **Add IoT Edge Device**.
:::image type="content" source="media/how-to-ingest-opcua-data/iot-edge-1.png" alt-text="Screenshot of adding an IoT Edge device in the Azure portal."::: Follow the prompts to create a new device.
-Once your device is created, copy either the **Primary Connection String** or **Secondary Connection String** value. You'll need this value later when you set up the edge device.
+Once your device is created, copy either the **Primary Connection String** or **Secondary Connection String** value. You'll need this value in the next section when you set up the edge device.
:::image type="content" source="media/how-to-ingest-opcua-data/iot-edge-2.png" alt-text="Screenshot of the Azure portal showing IoT Edge device connection strings.":::
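The hub and edge device setup above can also be scripted. A hedged sketch with the Azure CLI and the `azure-iot` extension; the resource group, hub, and device names are placeholders:

```azurecli-interactive
# Hedged sketch: create an F1 (free) hub, register an IoT Edge device,
# and retrieve its connection string. Names are placeholders.
az extension add --name azure-iot

# The F1 tier only allows 2 partitions.
az iot hub create --resource-group myResourceGroup --name myOpcuaHub --sku F1 --partition-count 2

# Register the IoT Edge device identity.
az iot hub device-identity create --hub-name myOpcuaHub --device-id myGatewayDevice --edge-enabled

# Retrieve the connection string used in the next section.
az iot hub device-identity connection-string show --hub-name myOpcuaHub --device-id myGatewayDevice --output tsv
```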
In this section, you set up IoT Edge and IoT Hub in preparation to create a gate
To get your OPC UA Server data into IoT Hub, you need a device that runs IoT Edge with the OPC Publisher module. OPC Publisher will then listen to OPC UA node updates and will publish the telemetry into IoT Hub in JSON format.
-#### Create Ubuntu Server virtual machine
+#### Create Ubuntu VM with IoT Edge
-Using the [Azure portal](https://portal.azure.com), create an Ubuntu Server virtual machine with the following specifications:
-* **Availability options**: No infrastructure redundancy required
-* **Image**: Ubuntu Server 18.04 LTS - Gen1
-* **Size**: Standard_B1ms - 1 vcpu, 2 GiB memory
- - The default size (Standard_b1s ΓÇô vcpu, 1GiB memory) is too slow for RDP. Updating it to the 2-GiB memory will provide a better RDP experience.
-
+Use the [ARM Template to deploy IoT Edge enabled VM](https://github.com/Azure/iotedge-vm-deploy/tree/1.4) to deploy an Ubuntu virtual machine with IoT Edge. You can use the **Deploy to Azure** button to set the template options through the Azure portal. Fill in the values for the VM, keeping in mind these specifics:
+* Remember the value you use for the **Dns Label Prefix**, as you'll use it to identify the device later.
+* For **VM Size**, enter *Standard_B1ms*.
+* For **Device Connection String**, enter the *primary connection string* you collected in the previous section while [setting up the IoT Edge device](#set-up-iot-edge-device).
-> [!NOTE]
-> If you choose to RDP into your Ubuntu VM, you can follow the instructions to [Install and configure xrdp to use Remote Desktop with Ubuntu](../virtual-machines/linux/use-remote-desktop.md).
+When you're finished setting up the values, select **Review + create**. This will create the new VM, which you can find in the Azure portal under a resource name with the prefix *vm-* (the exact name is generated randomly).
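If you prefer the command line over the **Deploy to Azure** button, a hedged sketch of deploying the same template with the Azure CLI follows. The template URI and parameter names are assumptions based on that repo's documented deployment command, and the resource names, DNS label, and password are placeholders:

```azurecli-interactive
# Hedged sketch: deploy the IoT Edge enabled Ubuntu VM from the linked ARM template.
# Replace the placeholders; the device connection string is the one collected in
# the previous section.
az deployment group create \
    --resource-group myResourceGroup \
    --template-uri "https://raw.githubusercontent.com/Azure/iotedge-vm-deploy/1.4/edgeDeploy.json" \
    --parameters dnsLabelPrefix='my-opcua-gateway' \
    --parameters adminUsername='azureuser' \
    --parameters authenticationType='password' \
    --parameters adminPasswordOrKey='<REPLACE_WITH_PASSWORD>' \
    --parameters deviceConnectionString='<REPLACE_WITH_DEVICE_CONNECTION_STRING>'
```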
-#### Install IoT Edge container
+Once the installation completes, connect to the new gateway VM.
-Follow the instructions to [Install IoT Edge on Linux](../iot-edge/how-to-provision-single-device-linux-symmetric.md).
-
-Once the installation completes, run the following command to verify the status of your installation:
+Run the following command on the VM to verify the status of your IoT Edge installation:
```bash
-admin@gateway:~$ sudo iotedge check
+sudo iotedge check
``` This command will run several tests to make sure your installation is ready to go.
-#### Install OPC Publisher module
-
-Next, install the OPC Publisher module on your gateway device.
-
-Start by getting the module from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/microsoft_iot.iotedge-opc-publisher).
+> [!NOTE]
+> You may see one error in the output related to IoT Edge Hub:
+>
+> **× production readiness: Edge Hub's storage directory is persisted on the host filesystem - Error**
+>
+> **Could not check current state of edgeHub container**
+>
+> This error is expected on a newly provisioned device because the IoT Edge Hub module isn't yet running. After the next step of installing the OPC Publisher module, this error will disappear.
+#### Install OPC Publisher module
-Then, follow the installation steps documented in the [OPC Publisher GitHub Repo](https://github.com/Azure/iot-edge-opc-publisher) to install the module on your Ubuntu VM.
+Next, install the OPC Publisher module on your gateway device. You'll complete the first part of this section on your main working device (not the gateway device), and you'll provide information about the gateway device where the module should be installed.
-In the step for [specifying container create options](https://github.com/Azure/iot-edge-opc-publisher#specifying-container-create-options-in-the-azure-portal), make sure to add the following json:
+Follow the installation steps documented in the [OPC Publisher GitHub Repo](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) to install the module on your gateway device VM. Here are some things to keep in mind during this process:
+* You'll be asked to connect to your IoT Hub and select the device where the module should be installed. You should see the **Dns label prefix** you created for the IoT Edge device earlier, and be able to select it as the gateway device.
+* In the step for [specifying container create options](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md#specifying-container-create-options-in-the-azure-portal), add the following json:
-```JSON
-{
- "Hostname": "opcpublisher",
- "Cmd": [
- "--pf=/appdata/publishednodes.json",
- "--aa"
- ],
- "HostConfig": {
- "Binds": [
- "/iiotedge:/appdata"
- ]
+ ```JSON
+ {
+ "Hostname": "opcpublisher",
+ "Cmd": [
+ "--pf=/appdata/publishednodes.json",
+ "--aa"
+ ],
+ "HostConfig": {
+ "Binds": [
+ "/iiotedge:/appdata"
+ ]
+ }
}
-}
-```
+ ```
+* On the **Routes** tab for the module creation, create a route with the value _FROM /* INTO $upstream_.
+ :::image type="content" source="media/how-to-ingest-opcua-data/set-route.png" alt-text="Screenshot of the Azure portal creating the route for the device module.":::
+
+Follow the rest of the prompts to create the module.
>[!NOTE] >The create options above should work in most cases without any changes, but if you're using your own gateway device that's different from the article guidance so far, you may need to adjust the settings to your situation.
-Follow the rest of the prompts to create the module.
- After about 15 seconds, you can run the `iotedge list` command on your gateway device, which lists all the modules running on your IoT Edge device. You should see the OPCPublisher module up and running. :::image type="content" source="media/how-to-ingest-opcua-data/iotedge-list.png" alt-text="Screenshot of IoT Edge list results.":::
-Finally, go to the `/iiotedge` directory and create a *publishednodes.json* file. The IDs in the file need to match the `NodeId` values that you [gathered earlier from the OPC Server](#install-opc-ua-simulation-software). Your file should look like something like this:
+Finally, go to the `/iiotedge` directory on your gateway device, and create a *publishednodes.json* file. Use the following example file body to create your own similar file. Update the `EndpointUrl` value to match the connection address with the public IP value that you created while [installing the OPC UA simulation software](#install-opc-ua-simulation-software). Then, update the IDs in the file as needed to match the `NodeId` values that you [gathered earlier from the OPC Server](#install-opc-ua-simulation-software).
```JSON [
Finally, go to the `/iiotedge` directory and create a *publishednodes.json* file
Save your changes to the *publishednodes.json* file.
-Then, run the following command:
+Then, run the following command on the gateway device:
```bash sudo iotedge logs OPCPublisher -f
The command will result in the output of the OPC Publisher logs. If everything i
Data should now be flowing from an OPC UA Server into your IoT Hub.
-To monitor the messages flowing into Azure IoT hub, you can use the following command:
+To monitor the messages flowing into Azure IoT Hub, you can run the following Azure CLI command on your main development machine:
```azurecli-interactive az iot hub monitor-events -n <iot-hub-instance> -t 0
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Last updated 09/28/2022
# Tutorial: Migrate SQL Server to an Azure SQL Database offline using Azure Data Studio with DMS (Preview)
+> [!NOTE]
+> Azure SQL Database targets are only available using the [Azure Data Studio Insiders](/sql/azure-data-studio/download-azure-data-studio#download-the-insiders-build-of-azure-data-studio) version of the Azure SQL Migration extension.
+ You can use the Azure SQL migration extension in Azure Data Studio to migrate the database(s) from a SQL Server instance to Azure SQL Database (Preview). In this tutorial, you'll learn how to migrate the **AdventureWorks2019** database from an on-premises instance of SQL Server to Azure SQL Database (Preview) by using the Azure SQL Migration extension for Azure Data Studio. This tutorial focuses on the offline migration mode that considers an acceptable downtime during the migration process.
dns Find Unhealthy Dns Records https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/scripts/find-unhealthy-dns-records.md
Title: Find unhealthy DNS records in Azure DNS - PowerShell script sample
description: In this article, learn how to use an Azure PowerShell script to find unhealthy DNS records. Previously updated : 09/27/2022 Last updated : 10/04/2022 # Find unhealthy DNS records in Azure DNS - PowerShell script sample
-The following Azure PowerShell script finds unhealthy DNS records in Azure DNS.
+The following Azure PowerShell script finds unhealthy DNS records in Azure DNS public zones.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] ```azurepowershell-interactive <#
- 1. Install Pre requisites Az PowerShell modules (https://learn.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0)
- 2. From PowerShell prompt navigate to folder where the script is saved and run the following command
+ 1. Install the prerequisite Az PowerShell modules (https://learn.microsoft.com/powershell/azure/install-az-ps?view=azps-5.7.0)
+ 2. Sign in to your Azure Account using Login-AzAccount or Connect-AzAccount.
+ 3. From an elevated PowerShell prompt, navigate to the folder where the script is saved and run the following command:
.\ Get-AzDNSUnhealthyRecords.ps1 -SubscriptionId <subscription id> -ZoneName <zonename>
- Replace subscription id with subscription id of interest.
- ZoneName with actual zone name.
+ Replace subscription id with the subscription id of interest.
+ Replace ZoneName with the actual zone name.
#> param( # subscription id to fetch dns records from
Foreach ($module in $AZModules) {
$WarningPreference = $StoreWarningPreference Write-Progress -Activity $ProgessActivity -Completed
-$context = Get-AzContext;
-if ($context.TokenCache -eq $null) {
- Write-host -ForegroundColor Yellow "Please Login to Azure Account using Login-AzAccount and run the script."
+$context = Get-AzAccessToken;
+if ($context.Token -eq $null) {
+ Write-host -ForegroundColor Yellow "Please sign in to your Azure Account using Login-AzAccount or Connect-AzAccount before running the script."
exit } $subscriptions = Get-AzSubscription
This script uses the following commands to create the deployment. Each item in t
| Command | Notes | |||
-| [Get-AzDnsZone](/powershell/module/az.dns/get-azdnszone) | Gets a DNS zone. |
+| [Get-AzDnsZone](/powershell/module/az.dns/get-azdnszone) | Gets an Azure public DNS zone. |
| [Get-AzDnsRecordSet](/powershell/module/az.dns/get-azdnsrecordset) | Gets a DNS record set. | ## Next steps
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
For more information about quotas and limits, see [Event Hubs quotas and limits]
Event Hubs dedicated clusters offer [availability zones](../availability-zones/az-overview.md#availability-zones) support where you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures. > [!IMPORTANT]
-> Event Hubs dedicated clusters require at least 8 Capacity Units(CUs) to enable availability zones. Clusters with self-serve scaling does not support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](https://learn.microsoft.com/azure/availability-zones/az-overview#azure-regions-with-availability-zones).
+> Event Hubs dedicated clusters require at least 8 Capacity Units (CUs) to enable availability zones. Clusters with self-serve scaling don't support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
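For reference, a hedged sketch of creating a dedicated cluster sized for availability zones with the Azure CLI, assuming the `az eventhubs cluster` command group supports a `--capacity` option in your CLI version; all names are placeholders:

```azurecli-interactive
# Hedged sketch: create a dedicated Event Hubs cluster with 8 CUs, the minimum
# for availability zone support. Names are placeholders; pick a region that
# supports availability zones.
az eventhubs cluster create \
    --resource-group myResourceGroup \
    --name myDedicatedCluster \
    --location eastus \
    --capacity 8
```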
## How to onboard
event-hubs Event Hubs Premium Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-premium-overview.md
For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas
Event Hubs premium offers [availability zones](../availability-zones/az-overview.md#availability-zones) support with no extra cost. Using availability zones, you can run event streaming workloads in physically separate locations within each Azure region that are tolerant to local failures. > [!IMPORTANT]
-> Availability zone support is only available in [Azure regions with availability zones](https://learn.microsoft.com/azure/availability-zones/az-overview#azure-regions-with-availability-zones).
+> Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones).
## Premium vs. dedicated tiers
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
While FastPath supports most configurations, it doesn't support the following fe
The following FastPath features are in Public preview:
-**VNet Peering** - FastPath will send traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway. This preview is available for both IPv4 and IPv6 connectivity.
+### Virtual network (VNet) peering
+FastPath will send traffic directly to any VM deployed in a virtual network peered to the one connected to ExpressRoute, bypassing the ExpressRoute virtual network gateway. This feature is available for both IPv4 and IPv6 connectivity.
-Available in all regions. This preview doesn't support FastPath connectivity to Azure Dedicated workloads.
+**FastPath support for VNet peering is only available for ExpressRoute Direct connections.**
-**User Defined Routes (UDRs)** - FastPath will honor UDRs configured on the GatewaySubnet and send traffic directly to an Azure Firewall or third party NVA.
+> [!NOTE]
+> * FastPath VNet peering connectivity is not supported for Azure Dedicated Host workloads.
+
+### User Defined Routes (UDRs)
+FastPath will honor UDRs configured on the GatewaySubnet and send traffic directly to an Azure Firewall or third party NVA.
-Available in all regions. This preview doesn't support FastPath connectivity to Azure Dedicated workloads.
+**FastPath support for UDRs is only available for ExpressRoute Direct connections.**
+
+> [!NOTE]
+> * FastPath UDR connectivity is not supported for Azure Dedicated Host workloads.
+> * FastPath UDR connectivity is not supported for IPv6 workloads.
**Private Link Connectivity for 10Gbps ExpressRoute Direct Connectivity** - Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path. This preview is available in the following Azure Regions.
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md
$connection = Get-AzVirtualNetworkGatewayConnection -Name "MyConnection" -Resour
$connection.ExpressRouteGatewayBypass = $True Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection ``` - ### FastPath and Private Link for 100 Gbps ExpressRoute Direct With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This is Generally Available for connections associated to 100 Gb ExpressRoute Direct circuits. To enable this, follow the below guidance:
Register-AzProviderFeature -FeatureName ExpressRoutePrivateEndpointGatewayBypass
## Enroll in ExpressRoute FastPath features (preview)
-FastPath support for virtual network peering is now in Public preview, both IPv4 and IPv6 scenarios are supported. IPv4 FastPath and VNet peering can be enabled on connections associated to both ExpressRoute Direct and ExpressRoute Partner circuits. IPv6 FastPath support for VNet peering is limited to connections associated to ExpressRoute Direct.
- ### FastPath virtual network peering and user defined routes (UDRs). With FastPath and virtual network peering, you can enable ExpressRoute connectivity directly to VMs in a local or peered virtual network, bypassing the ExpressRoute virtual network gateway in the data path. With FastPath and UDR, you can configure a UDR on the GatewaySubnet to direct ExpressRoute traffic to an Azure Firewall or third party NVA. FastPath will honor the UDR and send traffic directly to the target Azure Firewall or NVA, bypassing the ExpressRoute virtual network gateway in the data path.
+To enroll in the preview, send an email to **exrpm@microsoft.com**, providing the following information:
+* Azure Subscription ID
+* Virtual Network (VNet) Resource ID
+* ExpressRoute Circuit Resource ID
+
+**FastPath support for virtual network peering and UDRs is only available for ExpressRoute Direct connections**.
+ > [!NOTE]
-> The previews for virtual network peering and user defined routes (UDRs) are offered together. You cannot enable only one scenario.
->
+> * Virtual network peering and UDR support is enabled by default for all new FastPath connections
+> * To enable virtual network peering and UDR support for FastPath connections configured before 9/19/2022, disable and enable FastPath on the target connection.
-To enroll in these previews, send an email to exrpm@microsoft.com and include the following information:
-* Subscription ID
-* Service key of the target ExpressRoute circuit
-* Name and Resource Group/ARM resource ID of the target virtual network(s)
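For connections created before that date, the toggle can also be done from the Azure CLI. A hedged sketch using a generic property update; the connection and resource group names are placeholders, and the `expressRouteGatewayBypass` property path is an assumption based on the PowerShell example earlier in this article:

```azurecli-interactive
# Hedged sketch: disable and then re-enable FastPath (gateway bypass) on an
# existing ExpressRoute connection. Names are placeholders.
az network vpn-connection update \
    --resource-group myResourceGroup \
    --name MyConnection \
    --set expressRouteGatewayBypass=false

az network vpn-connection update \
    --resource-group myResourceGroup \
    --name MyConnection \
    --set expressRouteGatewayBypass=true
```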
### FastPath and Private Link for 10 Gbps ExpressRoute Direct With FastPath and Private Link, Private Link traffic sent over ExpressRoute bypasses the ExpressRoute virtual network gateway in the data path. This preview supports connections associated to 10 Gbps ExpressRoute Direct circuits. This preview doesn't support ExpressRoute circuits managed by an ExpressRoute partner.
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Taipei** | Chief Telecom | 2 | n/a | Supported | Chief Telecom, Chunghwa Telecom, FarEasTone | | **Tel Aviv** | Bezeq International | 2 | n/a | Supported | | | **Tokyo** | [Equinix TY4](https://www.equinix.com/locations/asia-colocation/japan-colocation/tokyo-data-centers/ty4/) | 2 | Japan East | Supported | Aryaka Networks, AT&T NetBond, BBIX, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, Intercloud, Internet Initiative Japan Inc. - IIJ, Megaport, NTT Communications, NTT EAST, Orange, Softbank, Telehouse - KDDI, Verizon </br></br> |
-| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
+| **Tokyo2** | [AT TOKYO](https://www.attokyo.com/) | 2 | Japan East | Supported | AT TOKYO, China Unicom Global, Colt, Equinix, Fibrenoire, IX Reach, Megaport, PCCW Global Limited, Tokai Communications |
| **Tokyo3** | [NEC](https://www.nec.com/en/global/solutions/cloud/inzai_datacenter.html) | 2 | Japan East | Supported | | | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond, Bell Canada, CenturyLink Cloud Connect, Cologix, Equinix, IX Reach Megaport, Telus, Verizon, Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | |
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **du datamena** |Supported |Supported | Dubai2 | | **[eir](https://www.eirevo.ie/cloud-services/cloud-connectivity)** |Supported |Supported | Dublin| | **[Epsilon Global Communications](https://epsilontel.com/solutions/cloud-connect/)** |Supported |Supported | Hong Kong2, Singapore, Singapore2 |
-| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **[Equinix](https://www.equinix.com/partners/microsoft-azure/)** |Supported |Supported | Amsterdam, Amsterdam2, Atlanta, Berlin, Bogota, Canberra2, Chicago, Dallas, Dubai2, Dublin, Frankfurt, Frankfurt2, Geneva, Hong Kong SAR, Hong Kong2, London, London2, Los Angeles*, Los Angeles2, Melbourne, Miami, Milan, New York, Osaka, Paris, Paris2, Quebec City, Rio de Janeiro, Sao Paulo, Seattle, Seoul, Silicon Valley, Singapore, Singapore2, Stockholm, Sydney, Tokyo, Tokyo2, Toronto, Washington DC, Zurich</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
| **Etisalat UAE** |Supported |Supported | Dubai | | **[euNetworks](https://eunetworks.com/services/solutions/cloud-connect/microsoft-azure-expressroute/)** |Supported |Supported | Amsterdam, Amsterdam2, Dublin, Frankfurt, London | | **[FarEasTone](https://www.fetnet.net/corporate/en/Enterprise.html)** |Supported |Supported | Taipei |
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 09/27/2022 Last updated : 10/04/2022
Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati
You can disregard this error message if the policy was successfully updated.
+> [!TIP]
+> Policy Analytics has a dependency on both Log Analytics and Azure Firewall resource-specific logging. Verify that the firewall is configured appropriately, or follow the previous instructions. Be aware that logs take 60 minutes to appear after you enable them for the first time, because logs are aggregated in the backend every hour. You can check that logs are configured appropriately by running a Log Analytics query on the resource-specific tables such as **AZFWNetworkRuleAggregation**, **AZFWApplicationRuleAggregation**, and **AZFWNatRuleAggregation**.
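One way to run that check from the command line is sketched below, assuming the `log-analytics` CLI extension is installed; the workspace GUID is a placeholder:

```azurecli-interactive
# Hedged sketch: confirm the resource-specific aggregation table is receiving data.
# Replace the GUID with your Log Analytics workspace ID.
az monitor log-analytics query \
    --workspace 00000000-0000-0000-0000-000000000000 \
    --analytics-query "AZFWNetworkRuleAggregation | take 10"
```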
+ ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md).
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Title: Tutorial - Configure HTTPS on a custom domain for Azure Front Door | Microsoft Docs
-description: In this tutorial, you learn how to enable and disable HTTPS on your Azure Front Door configuration for a custom domain.
+ Title: Tutorial - Configure HTTPS on a custom domain for Azure Front Door (classic) | Microsoft Docs
+description: In this tutorial, you learn how to enable and disable HTTPS on your Azure Front Door (classic) configuration for a custom domain.
documentationcenter: ''
Last updated 06/06/2022
-#Customer intent: As a website owner, I want to enable HTTPS on the custom domain in my Front Door so that my users can use my custom domain to access their content securely.
+#Customer intent: As a website owner, I want to enable HTTPS on the custom domain in my Front Door (classic) so that my users can use my custom domain to access their content securely.
-# Tutorial: Configure HTTPS on a Front Door custom domain
+# Tutorial: Configure HTTPS on a Front Door (classic) custom domain
-This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, https:\//www.contoso.com), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with your Front Door (classic) under the frontend hosts section. By using the HTTPS protocol on your custom domain (for example, https:\//www.contoso.com), you ensure that your sensitive data is delivered securely via TLS/SSL encryption when it's sent across the internet. When your web browser is connected to a web site via HTTPS, it validates the web site's security certificate and verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
Azure Front Door supports HTTPS on a Front Door default hostname, by default. For example, if you create a Front Door (such as `https://contoso.azurefd.net`), HTTPS is automatically enabled for requests made to `https://contoso.azurefd.net`. However, once you onboard the custom domain 'www.contoso.com' you'll need to additionally enable HTTPS for this frontend host.
You can use your own certificate to enable the HTTPS feature. This process is do
Register the service principal for Azure Front Door as an app in your Azure Active Directory (Azure AD) by using Azure PowerShell or the Azure CLI. > [!NOTE]
-> This action requires you to have Global Administrator permissions in Azure AD. The registration only needs to be performed **once per Azure AD tenant**.
+> * This action requires you to have Global Administrator permissions in Azure AD. The registration only needs to be performed **once per Azure AD tenant**.
+> * Azure Front Door (classic) has a different *Application Id* than Azure Front Door Standard/Premium tier.
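Whichever tool you use, the registration amounts to creating a service principal for the Front Door application ID in your tenant. A hedged sketch with the Azure CLI; the GUID below is a placeholder, so substitute the application ID that matches your Front Door tier:

```azurecli-interactive
# Hedged sketch: register the Azure Front Door service principal in the tenant.
# The GUID is a placeholder for the Front Door (classic) application ID.
az ad sp create --id 00000000-0000-0000-0000-000000000000
```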
##### Azure PowerShell
frontdoor How To Cache Purge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-cli.md
Best practice is to make sure your users always obtain the latest copy of your a
[!INCLUDE [azure-cli-prepare-your-environment](../../../includes/azure-cli-prepare-your-environment.md)] * Review [Caching with Azure Front Door](../front-door-caching.md) to understand how caching works.
-* Have a functioning Azure Front Door profile. Refer [Create a Front Door - CLI](../create-front-door-cli.md)to learn how to create one.
+* Have a functioning Azure Front Door profile. Refer to [Create a Front Door - CLI](../create-front-door-cli.md) to learn how to create one.
## Configure cache purge
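A hedged sketch of what a purge call looks like with the `afd` commands in the Azure CLI; the resource group, profile, endpoint, and path values are placeholders:

```azurecli-interactive
# Hedged sketch: purge cached content under /images/* from a Front Door endpoint.
# Resource group, profile, and endpoint names are placeholders.
az afd endpoint purge \
    --resource-group myRGFD \
    --profile-name contosoAFD \
    --endpoint-name contosoFrontend \
    --content-paths '/images/*'
```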
frontdoor How To Enable Private Link Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account-cli.md
+
+ Title: 'Connect Azure Front Door Premium to a Storage Account origin with Private Link - Azure CLI'
+
+description: Learn how to connect your Azure Front Door Premium to a Storage Account privately - Azure CLI.
++++ Last updated : 10/04/2022+++
+# Connect Azure Front Door Premium to a Storage Account origin with Private Link with Azure CLI
+
+This article will guide you through how to configure Azure Front Door Premium tier to connect to your Storage Account privately using the Azure Private Link service with Azure CLI.
+++
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Have a functioning Azure Front Door Premium profile, an endpoint and an origin group. For more information on how to create an Azure Front Door profile, see [Create a Front Door - CLI](../create-front-door-cli.md).
+* Have a functioning Storage Account that is private. Refer to [this article](../../storage/common/storage-private-endpoints.md) to learn how to configure one.
+
+> [!NOTE]
+> Private endpoints requires your Storage Account to meet certain requirements. For more information, see [Using Private Endpoints for Azure Storage](../../storage/common/storage-private-endpoints.md).
+
+## Enable Private Link to a Storage Account in Azure Front Door Premium
+
+Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to create a new Azure Front Door origin. Enter the following settings to configure the Storage Account you want Azure Front Door Premium to connect with privately. Notice the `private-link-location` must be in one of the [available regions](../private-link.md#region-availability) and the `private-link-sub-resource-type` must be **blob**.
+
+```azurecli-interactive
+az afd origin create --enabled-state Enabled \
+ --resource-group myRGFD \
+ --origin-group-name og1 \
+ --origin-name mystorageorigin \
+ --profile-name contosoAFD \
+ --host-name mystorage.blob.core.windows.net \
+ --origin-host-header mystorage.blob.core.windows.net \
+ --http-port 80 \
+ --https-port 443 \
+ --priority 1 \
+ --weight 500 \
+ --enable-private-link true \
+ --private-link-location EastUS \
+ --private-link-request-message 'AFD storage origin Private Link request.' \
+ --private-link-resource /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRGFD/providers/Microsoft.Storage/storageAccounts/mystorage \
+ --private-link-sub-resource-type blob
+```
+
+## Approve Azure Front Door Premium private endpoint connection from Azure Storage
+
+1. Run [az network private-endpoint-connection list](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-list) to list the private endpoint connections for your storage account. Make note of the `Resource ID` of the private endpoint connection available in the first line of the output.
+
+ ```azurecli-interactive
+ az network private-endpoint-connection list --name mystorage --resource-group myRGFD --type Microsoft.Storage/storageAccounts
+ ```
+
+1. Run [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) to approve the private endpoint connection
+
+ ```azurecli-interactive
+ az network private-endpoint-connection approve --id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRGFD/providers/Microsoft.Storage/storageAccounts/mystorage/privateEndpointConnections/mystorage.00000000-0000-0000-0000-000000000000
+ ```
+
+1. Once approved, it will take a few minutes for the connection to fully establish. You can now access your storage account from Azure Front Door Premium. Direct access to the storage account from the public internet is disabled once the private endpoint is enabled.
+
+## Next steps
+
+Learn about [Private Link service with storage account](../../storage/common/storage-private-endpoints.md).
frontdoor How To Enable Private Link Web App Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app-cli.md
+
+ Title: 'Connect Azure Front Door Premium to an App Service origin with Private Link - Azure CLI'
+
+description: Learn how to connect your Azure Front Door Premium to a webapp privately using Azure CLI.
++++ Last updated : 10/04/2022+++
+# Connect Azure Front Door Premium to an App Service origin with Private Link using Azure CLI
+
+This article will guide you through how to configure Azure Front Door Premium tier to connect to your App service privately using the Azure Private Link service with Azure CLI.
+++
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Have a functioning Azure Front Door Premium profile, an endpoint and an origin group. For more information on how to create an Azure Front Door profile, see [Create a Front Door - CLI](../create-front-door-cli.md).
+* Have a functioning Web App that is private. Refer to [this article](../../private-link/create-private-link-service-cli.md) to learn how to configure one.
+
+> [!NOTE]
+> Private endpoints requires your App Service plan or function hosting plan to meet some requirements. For more information, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md).
+
+## Enable Private Link to an App Service in Azure Front Door Premium
+
+Run [az afd origin create](/cli/azure/afd/origin#az-afd-origin-create) to create a new Azure Front Door origin. Enter the following settings to configure the App service you want Azure Front Door Premium to connect with privately. Notice the `private-link-location` must be in one of the [available regions](../private-link.md#region-availability) and the `private-link-sub-resource-type` must be **sites**.
+
+```azurecli-interactive
+az afd origin create --enabled-state Enabled \
+ --resource-group myRGFD \
+ --origin-group-name og1 \
+ --origin-name pvtwebapp \
+ --profile-name contosoAFD \
+ --host-name example.contoso.com \
+ --origin-host-header example.contoso.com \
+ --http-port 80 \
+ --https-port 443 \
+ --priority 1 \
+ --weight 500 \
+ --enable-private-link true \
+ --private-link-location EastUS \
+ --private-link-request-message 'AFD app service origin Private Link request.' \
+ --private-link-resource /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRGFD/providers/Microsoft.Web/sites/webapp1/appServices\
+ --private-link-sub-resource-type sites
+```
+
+## Approve Azure Front Door Premium private endpoint connection from App Service
+
+1. Run [az network private-endpoint-connection list](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-list) to list the private endpoint connections for your web app. Note down the `Resource ID` of the private endpoint connection available in the first line of the output.
+
+ ```azurecli-interactive
+ az network private-endpoint-connection list --name webapp1 --resource-group myRGFD --type Microsoft.Web/sites
+ ```
+
+1. Run [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) to approve the private endpoint connection
+
+ ```azurecli-interactive
+ az network private-endpoint-connection approve --id /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRGFD/providers/Microsoft.Web/sites/webapp1/privateEndpointConnections/00000000-0000-0000-0000-000000000000
+ ```
+
+1. Once approved, it will take a few minutes for the connection to fully establish. You can now access your app service from Azure Front Door Premium. Direct access to the App Service from the public internet is disabled once the private endpoint is enabled.
+
+## Next steps
+
+Learn about [Private Link service with App service](../../app-service/networking/private-endpoint.md).
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
This article will guide you through how to configure Azure Front Door Premium ti
* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Create a [Private Link](../../private-link/create-private-link-service-portal.md) service for your origin web servers.
-> [!Note]
+> [!NOTE]
> Private endpoints requires your App Service plan or function hosting plan to meet some requirements. For more information, see [Using Private Endpoints for Azure Web App](../../app-service/networking/private-endpoint.md). ## Sign in to Azure
In this section, you'll map the Private Link service to a private endpoint creat
| Setting | Value | | - | -- | | Name | Enter a name to identify this storage blog origin. |
- | Origin Type | Storage (Azure Blobs) |
+ | Origin Type | App services |
| Host name | Select the host from the dropdown that you want as an origin. | | Origin host header | You can customize the host header of the origin or leave it as default. | | HTTP port | 80 (default) |
hpc-cache Hpc Cache Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-add-storage.md
Title: Add storage to an Azure HPC Cache description: How to define storage targets so that your Azure HPC Cache can use your on-premises NFS system or Azure Blob containers for long-term file storage -+ Previously updated : 01/19/2022 Last updated : 10/05/2022 -+ # Add storage targets
The procedure to add a storage target is slightly different depending on the typ
## Size your cache correctly to support your storage targets
-The number of supported storage targets depends on the cache size, which is set when you create the cache. The cache capacity is a combination of throughput capacity (in GB/s) and storage capacity (in TB).
+When you create the cache, make sure you select the type and size that will support the number of storage targets you need.
+
+The number of supported storage targets depends on the cache type and the cache capacity. Cache capacity is a combination of throughput capacity (in GB/s) and storage capacity (in TB).
* Up to 10 storage targets - A standard cache with the smallest or medium cache storage value for your selected throughput can have a maximum of 10 storage targets.
- For example, if you choose 2GB/second throughput and don't choose the highest cache storage size, your cache supports a maximum of 10 storage targets.
+ For example, if you choose 2 GB/second throughput and don't choose the largest cache storage size (12 TB), your cache supports a maximum of 10 storage targets.
* Up to 20 storage targets -
- * All high-throughput caches (which have preconfigured cache storage sizes) can support up to 20 storage targets.
+ * All read-only high-throughput caches (which have preconfigured cache storage sizes) can support up to 20 storage targets.
* Standard caches can support up to 20 storage targets if you choose the highest available cache size for your selected throughput value. (If using Azure CLI, choose the highest valid cache size for your cache SKU.)
-Read [Set cache capacity](hpc-cache-create.md#set-cache-capacity) to learn more about throughput and cache size settings.
+Read [Choose cache type and capacity](hpc-cache-create.md#choose-cache-type-and-capacity) to learn more about throughput and cache size settings.
## Choose the correct storage target type
These three options cover most situations:
* **Greater than 15% writes** - This option speeds up both read and write performance.
- Client reads and client writes are both cached. Files in the cache are assumed to be newer than files on the back-end storage system. Cached files are only automatically checked against the files on back-end storage every eight hours. Modified files in the cache are written to the back-end storage system after they have been in the cache for 20 minutes with no other changes.
+ Client reads and client writes are both cached. Files in the cache are assumed to be newer than files on the back-end storage system. Cached files are only automatically checked against the files on back-end storage every eight hours. Modified files in the cache are written to the back-end storage system after they have been in the cache for an hour with no other changes.
Do not use this option if any clients mount the back-end storage volume directly, because there is a risk it will have outdated files.
hpc-cache Hpc Cache Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/hpc-cache-create.md
Title: Create an Azure HPC Cache description: How to create an Azure HPC Cache instance-+ Previously updated : 01/26/2022- Last updated : 10/03/2022+ ms.devlang: azurecli
In **Service Details**, set the cache name and these other attributes:
* Virtual network - You can select an existing one or create a new virtual network. * Subnet - Choose or create a subnet with at least 64 IP addresses (/24). This subnet must be used only for this Azure HPC Cache instance.
-## Set cache capacity
-<!-- referenced from GUI - update aka.ms/hpc-cache-iops link if you change this header text -->
+## Choose cache type and capacity
+<!-- referenced from GUI - update aka.ms/hpc-cache-iops link if you change this header text - also check for cross-reference from add storage article -->
-On the **Cache** page, you must set the capacity of your cache. The values set here determine how quickly your cache can service client requests and how much data it can hold.
+On the **Cache** page, specify the type and size of cache to create. These values determine your cache's capabilities, including:
-Capacity also affects the cache's cost, and how many storage targets it can support.
+* How quickly the cache can service client requests
+* How much data the cache can hold
+* Whether or not the cache supports read/write caching mode
+* How many storage targets it can have
+* The cache's cost
-Cache capacity is a combination of two values:
+First, choose the type of cache you want. Options include:
-* The maximum data transfer rate for the cache (throughput), in GB/second
-* The amount of storage allocated for cached data, in TB
+* **Read-write standard caching** - A flexible, general-purpose cache
+* **Read-only caching** - A high-throughput cache designed to minimize latency for file access
-![Screenshot of cache sizing page in the Azure portal.](media/hpc-cache-create-capacity.png)
+Read more about these cache type options below in [Choose the cache type for your needs](#choose-the-cache-type-for-your-needs).
+
+Second, select the cache's capacity. Cache capacity is a combination of two values:
+
+* **Maximum throughput** - The data transfer rate for the cache, in GB/second
+* **Cache size** - The amount of storage allocated for cached data, in TB
+
+![Screenshot of cache attributes page in the Azure portal. Fields for Cache type, Maximum throughput, and Cache size are filled in.](media/create-cache-type-and-capacity.png)
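For comparison with the portal fields above, a hedged sketch of creating a standard 2 GB/s, 3 TB cache with the Azure CLI `hpc-cache` extension; the resource names and subnet ID are placeholders, and the parameter names are assumptions based on that extension:

```azurecli-interactive
# Hedged sketch: create a standard read-write cache with 2 GB/s throughput
# (Standard_2G SKU) and 3 TB (3072 GB) of cache storage. Names and the subnet
# ID are placeholders.
az extension add --name hpc-cache

az hpc-cache create \
    --resource-group myResourceGroup \
    --name myHpcCache \
    --location eastus \
    --sku-name Standard_2G \
    --cache-size-gb 3072 \
    --subnet "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/hpc-cache-subnet"
```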
### Understand throughput and cache size
Azure HPC Cache manages which files are cached and pre-loaded to maximize cache
Choose a cache storage size that can comfortably hold the active set of working files, plus additional space for metadata and other overhead.
-Throughput and cache size also affect how many storage targets are supported for a particular cache. If you want to use more than 10 storage targets with your cache, you must choose the highest available cache storage size value available for your throughput size, or choose one of the high-throughput read-only configurations. Learn more in [Add storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets).
+Throughput and cache size also affect how many storage targets are supported for a particular cache. If you want to use more than 10 storage targets with your cache, you must choose the highest available cache storage size value available for your throughput size, or choose the high-throughput read-only configuration. Learn more in [Add storage targets](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets).
If you need help sizing your cache correctly, contact Microsoft Service and Support. ### Choose the cache type for your needs
-When you choose your cache capacity, you might notice that some throughput values have fixed cache sizes, and others let you select from multiple cache size options. This is because there are two different styles of cache infrastructure:
+When you choose your cache capacity, you might notice that some cache types have one fixed cache size, and others let you select from multiple cache size options for each throughput value. This is because they use different styles of cache infrastructure.
-* Standard caches - listed under **Read-write caching** in the throughput menu
+* Standard caches - Cache type **Read-write caching**
With standard caches, you can choose from several cache size values. These caches can be configured for read-only or for read and write caching.
-* High-throughput caches - listed under **Read-only caching** in the throughput menu
+* High-throughput caches - Cache type **Read-only caching**
- The high-throughput configurations have set cache sizes because they're preconfigured with NVME disks. They're designed to optimize file read access only.
+ The high-throughput read-only caches are preconfigured with only one cache size option per throughput value. They're designed to optimize file read access only.
-![Screenshot of maximum throughput menu in the portal. There are several size options under the heading "Read-write caching" and several under the heading "Read-only".](media/rw-ro-cache-sizing.png)
+![Screenshot of the Cache tab in the HPC Cache creation workflow. The Cache type field is filled with Read-write standard caching, and the Maximum throughput field is filled with Up to 4 GB/s. The Cache size menu is expanded and shows several selectable size options: 6 TB, 12 TB, and 24 TB.](media/cache-size-options.png)
This table explains some important differences between the two options.
-| Attribute | Standard cache | High-throughput cache |
+| Attribute | Standard cache | Read-only high-throughput cache |
|--|--|--|
-| Throughput menu category |"Read-write caching"| "Read-only caching"|
+| Cache type |"Read-write standard caching"| "Read-only caching"|
| Throughput sizes | 2, 4, or 8 GB/sec | 4.5, 9, or 16 GB/sec | | Cache sizes | 3, 6, or 12 TB for 2 GB/sec<br/> 6, 12, or 24 TB for 4 GB/sec<br/> 12, 24, or 48 TB for 8 GB/sec| 21 TB for 4.5 GB/sec <br/> 42 TB for 9 GB/sec <br/> 84 TB for 16 GB/sec | | Maximum number of storage targets | [10 or 20](hpc-cache-add-storage.md#size-your-cache-correctly-to-support-your-storage-targets) depending on cache size selection | 20 |
-| Compatible storage target types | Azure blob, on-premises NFS storage, NFS-enabled blob | on-premises NFS storage <br/>NFS-enabled blob storage is in preview for this combination |
+| Compatible storage target types | Azure Blob, on-premises NFS storage, NFS-enabled blob | on-premises NFS storage <br/>NFS-enabled blob storage is in preview for this combination |
| Caching styles | Read caching or read-write caching | Read caching only | | Cache can be stopped to save cost when not needed | Yes | No |
internet-peering Howto Peering Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-portal.md
# Enable Azure Peering Service on a Direct peering by using the Azure portal
-This article describes how to enable Azure [Peering Service](overview-peering-service.md) on a Direct peering by using the Azure portal.
+This article describes how to enable [Azure Peering Service](/articles/peering-service/about.md) on a Direct peering by using the Azure portal.
If you prefer, you can complete this guide by using [PowerShell](howto-peering-service-powershell.md).
To modify connection settings, see the "Modify a Direct peering" section in [Cre
## Additional resources
-For frequently asked questions, see the [Peering Service FAQ](service-faqs.yml).
+For frequently asked questions, see the [Peering Service FAQ](service-faqs.yml).
internet-peering Howto Peering Service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-peering-service-powershell.md
# Enable Azure Peering Service on a Direct peering by using PowerShell
-This article describes how to enable Azure [Peering Service](overview-peering-service.md) on a Direct peering by using PowerShell cmdlets and the Azure Resource Manager deployment model.
+This article describes how to enable [Azure Peering Service](/articles/peering-service/about.md) on a Direct peering by using PowerShell cmdlets and the Azure Resource Manager deployment model.
If you prefer, you can complete this guide by using the Azure [portal](howto-peering-service-portal.md).
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
To see more of the features of DPS, see the [Features section of the overview pa
## Choose an authentication method
-### Symmetric keys attestation
-
-Symmetric key attestation is a simple approach to authenticating a device. This attestation method represents a "Hello world" experience for developers who are new to device provisioning, or don't have strict security requirements.
-
-When you create a new device identity in IoT Hub, the service creates two keys. You place one of the keys on the device, and it presents the key to IoT Hub when authenticating.
-
-This authentication method is faster to get started but not as secure. Device provisioning using a TPM or X.509 certificates is more secure and should be used for solutions with more stringent security requirements.
- ### X.509 certificate attestation
-Using X.509 certificates as an attestation mechanism is an excellent way to scale production and simplify device provisioning. Typically, X.509 certificates are arranged in a certificate chain of trust. Starting with a self-signed or trusted root certificate, each certificate in the chain signs the next lower certificate. This pattern creates a delegated chain of trust from the root certificate down through each intermediate certificate to the final "leaf" certificate installed on a device.
+Using X.509 certificates as an attestation mechanism is the recommended way to scale production and simplify device provisioning. Typically, X.509 certificates are arranged in a certificate chain of trust. Starting with a self-signed or trusted root certificate, each certificate in the chain signs the next lower certificate. This pattern creates a delegated chain of trust from the root certificate down through each intermediate certificate to the final "leaf" certificate installed on a device.
-You create two X.509 identity certificates and place them on the device. When you create a new device identity in IoT Hub, you provide thumbprints from both certificates. When the device authenticates to IoT Hub, it presents one certificate and IoT Hub verifies that the certificate matches its thumbprint.
+You create two X.509 identity certificates and place them on the device. When you create a new device identity in IoT Hub, you provide thumbprints from both certificates. When the device authenticates to IoT Hub, it presents one certificate and IoT Hub verifies that the certificate matches its thumbprint. The X.509 keys on the device should be stored in a Hardware Security Module (HSM), for example a PKCS#11 module, ATECC, or a discrete TPM (dTPM).
-This authentication method is more secure than symmetric keys and is recommended for production scenarios.
+This authentication method is more secure than symmetric keys and supports group enrollments, which simplifies management for a large number of devices. This authentication method is recommended for production scenarios.
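As an illustration of the thumbprint step, the following minimal sketch computes a certificate's thumbprint with the Python `cryptography` package. The file name `device-primary.pem` is a placeholder, and the SHA-1 hex digest shown here is the form commonly displayed for IoT Hub thumbprints; treat this as a sketch rather than the SDK's own implementation.

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load the leaf (device) certificate from a PEM file (placeholder file name).
with open("device-primary.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The thumbprint is a hash of the DER-encoded certificate; SHA-1 hex is the
# form commonly pasted into the IoT Hub device identity.
thumbprint = cert.fingerprint(hashes.SHA1()).hex().upper()
print(thumbprint)
```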
### Trusted platform module (TPM) attestation
-Using TPM attestation is the most secure method for device provisioning, as it provides authentication features in both software and hardware. Each TPM chip uses a unique endorsement key to verify its authenticity.
+Using TPM attestation is a method for device provisioning that uses authentication features in both software and hardware. Each TPM chip uses a unique endorsement key to verify its authenticity.
TPM attestation is only available for provisioning at-scale with DPS, and only supports individual enrollments not group enrollments. Group enrollments aren't available because of the device-specific nature of TPM.
TPM 2.0 is required when you use TPM attestation with the device provisioning se
This authentication method is more secure than symmetric keys and is recommended for production scenarios.
+### Symmetric keys attestation
+
+Symmetric key attestation is a simple approach to authenticating a device. This attestation method represents a "Hello world" experience for developers who are new to device provisioning, or don't have strict security requirements.
+
+When you create a new device identity in IoT Hub, the service creates two keys. You place one of the keys on the device, and it presents the key to IoT Hub when authenticating.
+
+This authentication method is faster to get started but not as secure. Device provisioning using a TPM or X.509 certificates is more secure and should be used for solutions with more stringent security requirements.
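To make the key exchange concrete, here's a minimal sketch of how a shared access signature is derived from a device's base64-encoded symmetric key. The device SDKs do this for you; the hub name, device name, and key below are hypothetical and used for illustration only.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, device_key: str, ttl_seconds: int = 3600) -> str:
    """Build a SharedAccessSignature string from a base64-encoded symmetric key."""
    expiry = int(time.time()) + ttl_seconds
    # Sign the URL-encoded resource URI plus the expiry with HMAC-SHA256.
    to_sign = f"{urllib.parse.quote_plus(resource_uri)}\n{expiry}".encode("utf-8")
    signature = hmac.new(base64.b64decode(device_key), to_sign, hashlib.sha256).digest()
    return (
        "SharedAccessSignature "
        f"sr={urllib.parse.quote_plus(resource_uri)}"
        f"&sig={urllib.parse.quote_plus(base64.b64encode(signature).decode())}"
        f"&se={expiry}"
    )


# Hypothetical hub and device names, and a throwaway demo key, for illustration only.
demo_key = base64.b64encode(b"not-a-real-device-key").decode()
print(generate_sas_token("contoso-hub.azure-devices.net/devices/device-01", demo_key))
```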
+ ## Next steps You can use the table of contents to navigate to the appropriate end-to-end guide for creating an IoT Edge device for your IoT Edge solution's platform, provisioning, and authentication requirements.
iot-edge How To Provision Devices At Scale Linux Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-tpm.md
The tasks are as follows:
A physical Linux device to be the IoT Edge device.
+If you're a device manufacturer, refer to the guidance on [integrating a TPM into the manufacturing process](../iot-dps/concepts-device-oem-security-practices.md#integrating-a-tpm-into-the-manufacturing-process).
+ # [Virtual machine](#tab/virtual-machine) A Windows development machine with [Hyper-V enabled](/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v). This article uses Windows 10 running an Ubuntu Server VM.
After the installation is finished and you've signed back in to your VM, you're
## Retrieve provisioning information for your TPM
+<!-- 1.1 -->
In this section, you build a tool that you can use to retrieve the registration ID and endorsement key for your TPM. 1. Sign in to your device, and then follow the steps in [Set up a Linux development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md#linux) to install and build the Azure IoT device SDK for C.
In this section, you build a tool that you can use to retrieve the registration
1. The output window displays the device's **Registration ID** and the **Endorsement key**. Copy these values for use later when you create an individual enrollment for your device in the device provisioning service.
-> [!TIP]
-> If you don't want to use the SDK tool to retrieve the information, you need to find another way to obtain the provisioning information. The endorsement key, which is unique to each TPM chip, is obtained from the TPM chip manufacturer associated with it. You can derive a unique registration ID for your TPM device. For example, you can create an SHA-256 hash of the endorsement key.
+<!-- end 1.1 -->
+
+<!-- iotedge-1.4 -->
+
+> [!NOTE]
+> This article previously used the `tpm_device_provision` tool from the IoT C SDK to generate provisioning info. If you relied on that tool previously, be aware that the steps below generate a different registration ID for the same public endorsement key. If you need to recreate the registration ID as before, refer to how the C SDK's [tpm_device_provision tool](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/tools/tpm_device_provision) generates it. Be sure the registration ID for the individual enrollment in DPS matches the registration ID the IoT Edge device is configured to use.
+
+In this section, you use the TPM2 software tools to retrieve the endorsement key for your TPM and then generate a unique registration ID. This section corresponds with [Step 3: Device has firmware and software installed](../iot-dps/concepts-device-oem-security-practices.md#step-3-device-has-firmware-and-software-installed) in the process for [integrating a TPM into the manufacturing process](../iot-dps/concepts-device-oem-security-practices.md#integrating-a-tpm-into-the-manufacturing-process).
+
+### Install the TPM2 Tools
+Sign in to your device, and install the `tpm2-tools` package.
+
+# [Ubuntu / Debian / Raspberry Pi OS](#tab/ubuntu+debian+rpios)
++
+ ```bash
+ sudo apt-get install tpm2-tools
+ ```
+
+# [Red Hat Enterprise Linux](#tab/rhel)
++
+ ```bash
+ sudo yum install tpm2-tools
+ ```
+++
+Run the following script to read the endorsement key, creating one if it does not already exist.
+
+ ```bash
+ #!/bin/sh
+ if [ "$USER" != "root" ]; then
+ SUDO="sudo "
+ fi
+
+ $SUDO tpm2_readpublic -Q -c 0x81010001 -o ek.pub 2> /dev/null
+ if [ $? -gt 0 ]; then
+ # Create the endorsement key (EK)
+ $SUDO tpm2_createek -c 0x81010001 -G rsa -u ek.pub
+
+ # Create the storage root key (SRK)
+ $SUDO tpm2_createprimary -Q -C o -c srk.ctx > /dev/null
+
+ # make the SRK persistent
+ $SUDO tpm2_evictcontrol -c srk.ctx 0x81000001 > /dev/null
+
+ # open transient handle space for the TPM
+ $SUDO tpm2_flushcontext -t > /dev/null
+ fi
+
+ printf "Gathering the registration information...\n\nRegistration Id:\n%s\n\nEndorsement Key:\n%s\n" $(sha256sum -b ek.pub | cut -d' ' -f1 | sed -e 's/[^[:alnum:]]//g') $(base64 -w0 ek.pub)
+ $SUDO rm ek.pub srk.ctx 2> /dev/null
+
+ ```
+
+The output window displays the device's **Endorsement key** and a unique **Registration ID**. Copy these values for use later when you create an individual enrollment for your device in the device provisioning service.
+
+<!-- end iotedge-1.4 -->
After you have your registration ID and endorsement key, you're ready to continue.
+> [!TIP]
+> If you don't want to use the TPM2 software tools to retrieve the information, you need to find another way to obtain the provisioning information. The endorsement key, which is unique to each TPM chip, is obtained from the TPM chip manufacturer associated with it. You can derive a unique registration ID for your TPM device. For example, as shown above, you can create a SHA-256 hash of the endorsement key.
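If you're scripting this step somewhere else, the same derivation the script above performs can be reproduced in Python. This is a minimal sketch that assumes `ek.pub` holds the public endorsement key blob exported by `tpm2_readpublic` or `tpm2_createek`:

```python
import base64
import hashlib

# Read the public endorsement key blob written by tpm2_readpublic / tpm2_createek.
with open("ek.pub", "rb") as f:
    ek = f.read()

# Registration ID: hex-encoded SHA-256 hash of the blob (alphanumeric only).
registration_id = hashlib.sha256(ek).hexdigest()

# Endorsement key value for DPS: the same blob, base64-encoded.
endorsement_key = base64.b64encode(ek).decode()

print("Registration Id:", registration_id)
print("Endorsement Key:", endorsement_key)
```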
+ <!-- Create an enrollment for your device using TPM provisioning information H2 and content --> [!INCLUDE [tpm-create-a-device-provision-service-enrollment.md](../../includes/tpm-create-a-device-provision-service-enrollment.md)]
iot-edge How To Provision Devices At Scale Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-x509.md
The tasks are as follows:
Using X.509 certificates as an attestation mechanism is an excellent way to scale production and simplify device provisioning. Typically, X.509 certificates are arranged in a certificate chain of trust. Starting with a self-signed or trusted root certificate, each certificate in the chain signs the next lower certificate. This pattern creates a delegated chain of trust from the root certificate down through each intermediate certificate to the final "leaf" certificate installed on a device.
+> [!TIP]
+> If your device has a Hardware Security Module (HSM) such as a TPM 2.0, then we recommend storing the X.509 keys securely in the HSM. Learn more about how to implement the zero-touch provisioning at scale described in [this blueprint](https://azure.microsoft.com/blog/the-blueprint-to-securely-solve-the-elusive-zerotouch-provisioning-of-iot-devices-at-scale) with the [iotedge-tpm2cloud](https://aka.ms/iotedge-tpm2cloud) sample.
+ ## Prerequisites <!-- Cloud resources prerequisites H3 and content -->
iot-hub Iot Hub Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-distributed-tracing.md
These instructions are for building the sample on Windows. For other environment
Replace the value of the `connectionString` constant with the device connection string you saved in the [register a device](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c#register-a-device) section of the [Send telemetry C Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c).
-1. Change the `MESSAGE_COUNT` define to `5000`:
-
- :::code language="c" source="~/samples-iot-distributed-tracing/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" range="56-60" highlight="3":::
- 1. Find the line of code that calls `IoTHubDeviceClient_LL_SetConnectionStatusCallback` to register a connection status callback function before the send message loop. Add code under that line as shown below to call `IoTHubDeviceClient_LL_EnablePolicyConfiguration` enabling distributed tracing for the device: :::code language="c" source="~/samples-iot-distributed-tracing/iothub_ll_telemetry_sample-c/iothub_ll_telemetry_sample.c" range="144-152" highlight="5":::
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/about-keys-secrets-certificates.md
Refer to the JOSE specifications for relevant data types for keys, encryption, a
Objects stored in Key Vault are versioned whenever a new instance of an object is created. Each version is assigned a unique identifier and URL. When an object is first created, it's given a unique version identifier and marked as the current version of the object. Creation of a new instance with the same object name gives the new object a unique version identifier, causing it to become the current version.
-Objects in Key Vault can be addressed by specifying a version or by omitting version for operations on current version of the object. For example, given a Key with the name `MasterKey`, performing operations without specifying a version causes the system to use the latest available version. Performing operations with the version-specific identifier causes the system to use that specific version of the object.
+Objects in Key Vault can be retrieved by specifying a version, or by omitting the version to get the latest version of the object. Performing operations on objects requires providing the version so that the specific version of the object is used.
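For example, the following minimal sketch with the Python Key Vault SDK shows both retrieval patterns. The vault URL is a placeholder, and the key name `MasterKey` is only an example:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# No version specified: the service returns the current (latest) version.
latest = client.get_key("MasterKey")

# Version specified: the service returns exactly that version of the key.
pinned = client.get_key("MasterKey", version=latest.properties.version)

print(latest.properties.version, pinned.properties.version)
```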
> [!NOTE] > The values you provide for Azure resource or object IDs may be copied globally for the purpose of running the service. The value provided should not include personally identifiable or sensitive information.
lab-services Administrator Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md
When you're assigning roles, it helps to follow these tips:
Your school may need to do content filtering to prevent students from accessing inappropriate websites. For example, to comply with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act). Lab Services doesn't offer built-in support for content filtering.
-There are two approaches that schools typically consider for content filtering:
--- Configure a firewall to filter content at the network level.-- Install third-party software directly on each computer that performs content filtering.
+Schools typically handle content filtering by installing third-party software that performs the filtering on each computer. Azure Lab Services doesn't support network-level filtering.
By default, Azure Lab Services hosts each lab's virtual network within a Microsoft-managed Azure subscription. You'll need to use [advanced networking](how-to-connect-vnet-injection.md) in the lab plan. Make sure to check known limitations of VNet injection before proceeding.
For more information about setting up and managing labs, see:
- [Configure a lab plan](lab-plan-setup-guide.md) - [Configure a lab](setup-guide.md)-- [Manage costs for labs](cost-management-guide.md)
+- [Manage costs for labs](cost-management-guide.md)
load-balancer Load Balancer Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-basic-upgrade-guidance.md
Last updated 09/19/2022
# Upgrading from basic Load Balancer - Guidance
+>[!Important]
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you're currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. This article will help guide you through the upgrade process.
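If you're not sure which load balancers in a subscription still use the Basic SKU, a quick inventory can help you scope the upgrade work. The following is a sketch using the `azure-mgmt-network` package; the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder subscription ID for illustration.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List every load balancer in the subscription and report the Basic SKU ones.
for lb in client.load_balancers.list_all():
    if lb.sku and lb.sku.name == "Basic":
        print(f"Basic SKU load balancer to upgrade: {lb.name} ({lb.id})")
```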
+ In this article, we'll discuss guidance for upgrading your Basic Load Balancer instances to Standard Load Balancer. Standard Load Balancer is recommended for all production instances and provides many [key differences](#basic-load-balancer-sku-vs-standard-load-balancer-sku) to your infrastructure.+ ## Steps to complete the upgrade We recommend the following approach for upgrading to Standard Load Balancer:
load-balancer Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/skus.md
# Azure Load Balancer SKUs
+>[!Important]
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you're currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+ Azure Load Balancer has three SKUs. ## <a name="skus"></a> SKU comparison
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
Last updated 09/22/2022
# Upgrade a basic load balancer used with Virtual Machine Scale Sets+
+>[!Important]
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you're currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+ [Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus). This article introduces a PowerShell module that creates a Standard Load Balancer with the same configuration as the Basic Load Balancer along with the associated Virtual Machine Scale Set.
load-balancer Upgrade Basic Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard.md
# Upgrade from a basic public to standard public load balancer
+>[!Important]
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you're currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+ [Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Azure Load Balancer SKUs, see [comparison table](./skus.md#skus). There are two stages in an upgrade:
load-balancer Upgrade Basicinternal Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basicInternal-standard.md
# Upgrade Azure Internal Load Balancer- No Outbound Connection Required+
+>[!Important]
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you're currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+ [Azure Standard Load Balancer](load-balancer-overview.md) offers a rich set of functionality and high availability through zone redundancy. To learn more about Load Balancer SKU, see [comparison table](./skus.md#skus). This article introduces a PowerShell script that creates a Standard Load Balancer with the same configuration as the Basic Load Balancer along with migrating traffic from Basic Load Balancer to Standard Load Balancer.
load-balancer Upgrade Internalbasic To Publicstandard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-internalbasic-to-publicstandard.md
# Upgrade an internal basic load balancer - Outbound connections required
+>[!Important]
+>On September 30, 2025, Basic Load Balancer will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-basic-load-balancer-will-be-retired-on-30-september-2025-upgrade-to-standard-load-balancer/). If you're currently using Basic Load Balancer, make sure to upgrade to Standard Load Balancer prior to the retirement date. See [Upgrading from Basic Load Balancer - Guidance](load-balancer-basic-upgrade-guidance.md) for upgrade guidance.
+ A standard [Azure Load Balancer](load-balancer-overview.md) offers increased functionality and high availability through zone redundancy. For more information about Azure Load Balancer SKUs, see [Azure Load Balancer SKUs](./skus.md#skus). A standard internal Azure Load Balancer doesn't provide outbound connectivity. The PowerShell script in this article, migrates the basic load balancer configuration to a standard public load balancer. There are four stages in the upgrade:
logic-apps Logic Apps Gateway Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-gateway-install.md
ms.suite: integration Previously updated : 08/20/2022 Last updated : 10/04/2022 #Customer intent: As a software developer, I want to install and set up the on-premises data gateway so that I can create logic app workflows that can access data in on-premises systems.
This article shows how to download, install, and set up your on-premises data ga
> To continue using your Azure Government account, but set up the gateway to work in the global multi-tenant Azure Commercial cloud instead, first sign > in during gateway installation with the `prod@microsoft.com` username. This solution forces the gateway to use the global multi-tenant Azure cloud, > but still lets you continue using your Azure Government account.
+ >
+ > The Azure gateway resource, which you create later, and your logic app resource must use the same Azure subscription, although these resources can exist in different resource groups.
+
+ * Your logic app resource and the Azure gateway resource, which you create after you install the gateway, must use the same Azure subscription. However, these resources can exist in different Azure resource groups.
* If you're updating your gateway installation, uninstall your current gateway first for a cleaner experience.
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
The following table highlights the key differences between managed online endpoi
| **Virtual Network (VNET)** | [Supported](how-to-secure-online-endpoint.md) (preview) | Supported | | **View costs** | [Endpoint and deployment level](how-to-view-online-endpoints-costs.md) | Cluster level | | **Mirrored traffic** | [Supported](how-to-safely-rollout-managed-endpoints.md#test-the-deployment-with-mirrored-traffic-preview) | Unsupported |
-| **No-code deployment** | Supported [MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models | Supported [MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models |
+| **No-code deployment** | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) | Supported ([MLflow](how-to-deploy-mlflow-models-online-endpoints.md) and [Triton](how-to-deploy-with-triton.md) models) |
### Managed online endpoints
machine-learning Dsvm Tools Data Platforms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-platforms.md
keywords: data science tools, data science virtual machine, tools for data scien
--++ Previously updated : 04/29/2021 Last updated : 10/04/2022
The following data platform tools are supported on the DSVM.
| Category | Value | | - | - | | What is it? | A local relational database instance |
-| Supported DSVM editions | Windows 2019, Ubuntu 18.04 (SQL Server 2019) |
+| Supported DSVM editions | Windows 2019, Linux (SQL Server 2019) |
| Typical uses | <ul><li>Rapid development locally with smaller dataset</li><li>Run In-database R</li></ul> | | Links to samples | <ul><li>A small sample of a New York City dataset is loaded into the SQL database:<br/> `nyctaxi`</li><li>Jupyter sample showing Microsoft Machine Learning Server and in-database analytics can be found at:<br/> `~notebooks/SQL_R_Services_End_to_End_Tutorial.ipynb`</li></ul> | | Related tools on the DSVM | <ul><li>SQL Server Management Studio</li><li>ODBC/JDBC drivers</li><li>pyodbc, RODBC</li></ul> |
Libraries to access data from Azure Blob storage or Azure Data Lake Storage, usi
For the Spark instance on the DSVM to access data stored in Blob storage or Azure Data Lake Storage, you must create and configure the `core-site.xml` file based on the template found in $SPARK_HOME/conf/core-site.xml.template. You must also have the appropriate credentials to access Blob storage and Azure Data Lake Storage. (Note that the template files use placeholders for Blob storage and Azure Data Lake Storage configurations.)
-For more detailed info about creating Azure Data Lake Storage service credentials, see [Authentication with Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory.md). After the credentials for Blob storage or Azure Data Lake Storage are entered in the core-site.xml file, you can reference the data stored in those sources through the URI prefix of wasb:// or adl://.
+For more detailed info about creating Azure Data Lake Storage service credentials, see [Authentication with Azure Data Lake Storage Gen1](../../data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory.md). After the credentials for Blob storage or Azure Data Lake Storage are entered in the core-site.xml file, you can reference the data stored in those sources through the URI prefix of wasb:// or adl://.
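After the credentials are in place in core-site.xml, a Spark session on the DSVM can read those paths directly. Here's a minimal sketch; the container and storage account names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("blob-read-example").getOrCreate()

# Placeholder container and storage account names; credentials come from core-site.xml.
df = spark.read.csv(
    "wasb://<container>@<storage-account>.blob.core.windows.net/data/sample.csv",
    header=True,
)
df.show(5)
```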
machine-learning Dsvm Tools Deep Learning Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md
Deep learning frameworks on the DSVM are listed below.
| Category | Value | |--|--| | Version(s) supported | 11 |
-| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 |
+| Supported DSVM editions | Windows Server 2019<br>Linux |
| How is it configured / installed on the DSVM? | _nvidia-smi_ is available on the system path. | | How to run it | Open a command prompt (on Windows) or a terminal (on Linux), and then run _nvidia-smi_. | ## [Horovod](https://github.com/uber/horovod)
Deep learning frameworks on the DSVM are listed below.
| Category | Value | | - | - | | Version(s) supported | 0.21.3|
-| Supported DSVM editions | Ubuntu 18.04 |
+| Supported DSVM editions | Linux |
| How is it configured / installed on the DSVM? | Horovod is installed in Python 3.5 | | How to run it | Activate the correct environment at the terminal, and then run Python. |
Deep learning frameworks on the DSVM are listed below.
| Category | Value | |--|--| | Version(s) supported | |
-| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 |
+| Supported DSVM editions | Windows Server 2019<br>Linux |
| What is it for? | NVIDIA tool for querying GPU activity | | How is it configured / installed on the DSVM? | `nvidia-smi` is on the system path. | | How to run it | On a virtual machine **with GPU's**, open a command prompt (on Windows) or a terminal (on Linux), and then run `nvidia-smi`. |
Deep learning frameworks on the DSVM are listed below.
| Category | Value | |--|--|
-| Version(s) supported | 1.9.0 (Ubuntu 18.04, Windows 2019) |
-| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 |
+| Version(s) supported | 1.9.0 (Linux, Windows 2019) |
+| Supported DSVM editions | Windows Server 2019<br>Linux |
| How is it configured / installed on the DSVM? | Installed in Python, conda environments 'py38_default', 'py38_pytorch' | | How to run it | Terminal: Activate the correct environment, and then run Python.<br/>* [JupyterHub](dsvm-ubuntu-intro.md#how-to-access-the-ubuntu-data-science-virtual-machine): Connect, and then open the PyTorch directory for samples. |
Deep learning frameworks on the DSVM are listed below.
| Category | Value | |--|--| | Version(s) supported | 2.5 |
-| Supported DSVM editions | Windows Server 2019<br>Ubuntu 18.04 |
+| Supported DSVM editions | Windows Server 2019<br>Linux |
| How is it configured / installed on the DSVM? | Installed in Python, conda environments 'py38_default', 'py38_tensorflow' | | How to run it | Terminal: Activate the correct environment, and then run Python. <br/> * Jupyter: Connect to [Jupyter](provision-vm.md) or [JupyterHub](dsvm-ubuntu-intro.md#how-to-access-the-ubuntu-data-science-virtual-machine), and then open the TensorFlow directory for samples. |
machine-learning Dsvm Tools Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-development.md
The Data Science Virtual Machine (DSVM) bundles several popular tools in a highl
| Category | Value | |--|--| | What is it? | Client IDE for Python language |
-| Supported DSVM versions | Windows 2019, Ubuntu 18.04 |
+| Supported DSVM versions | Windows 2019, Linux |
| Typical uses | Python development | | How to use and run it | Desktop shortcut (`C:\Program Files\tk`) on Windows. Desktop shortcut (`/usr/bin/pycharm`) on Linux |
machine-learning Dsvm Tools Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-languages.md
artificial intelligence (AI) applications. Here are some of the notable ones.
| Category | Value | |--|--| | Language versions supported | Python 3.8 |
-| Supported DSVM editions | Windows Server 2019, Ubuntu 18.04 |
+| Supported DSVM editions | Windows Server 2019, Linux |
| How is it configured / installed on the DSVM? | There is multiple `conda` environments whereby each of these has different Python packages pre-installed. To list all available environments in your machine, run `conda env list`. | ### How to use and run it
machine-learning Dsvm Tools Productivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-productivity.md
Last updated 05/12/2021
In addition to the data science and programming tools, the DSVM contains productivity tools to help you capture and share insights with your colleagues. Microsoft 365 is the most productive and most secure Office experience for enterprises, allowing your teams to work together seamlessly from anywhere, anytime. With Power BI Desktop you can go from data to insight to action. And the Microsoft Edge browser is a modern, fast, and secure Web browser.
-| Tool | Windows 2019 Server DSVM | Ubuntu 18.04 DSVM | Usage notes |
+| Tool | Windows 2019 Server DSVM | Linux DSVM | Usage notes |
|--|:-:|:-:|:-| | [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | | | [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989; </span> | <span class='red-x'>&#10060; </span> | |
machine-learning Dsvm Tutorial Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-bicep.md
# Quickstart: Create an Ubuntu Data Science Virtual Machine using Bicep
-This quickstart will show you how to create an Ubuntu 18.04 Data Science Virtual Machine using Bicep. Data Science Virtual Machines are cloud-based virtual machines preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU.
+This quickstart will show you how to create an Ubuntu Data Science Virtual Machine using Bicep. Data Science Virtual Machines are cloud-based virtual machines preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU.
[!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
The following resources are defined in the Bicep file:
* [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks) * [Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses) * [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
-* [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines): Create a cloud-based virtual machine. In this template, the virtual machine is configured as a Data Science Virtual Machine running Ubuntu 18.04.
+* [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines): Create a cloud-based virtual machine. In this template, the virtual machine is configured as a Data Science Virtual Machine running Ubuntu.
## Deploy the Bicep file
machine-learning Dsvm Tutorial Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tutorial-resource-manager.md
# Quickstart: Create an Ubuntu Data Science Virtual Machine using an ARM template
-This quickstart will show you how to create an Ubuntu 18.04 Data Science Virtual Machine using an Azure Resource Manager template (ARM template). Data Science Virtual Machines are cloud-based virtual machines preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU.
+This quickstart will show you how to create an Ubuntu Data Science Virtual Machine using an Azure Resource Manager template (ARM template). Data Science Virtual Machines are cloud-based virtual machines preloaded with a suite of data science and machine learning frameworks and tools. When deployed on GPU-powered compute resources, all tools and libraries are configured to use the GPU.
[!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)]
The following resources are defined in the template:
* [Microsoft.Network/virtualNetworks](/azure/templates/microsoft.network/virtualnetworks) * [Microsoft.Network/publicIPAddresses](/azure/templates/microsoft.network/publicipaddresses) * [Microsoft.Storage/storageAccounts](/azure/templates/microsoft.storage/storageaccounts)
-* [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines): Create a cloud-based virtual machine. In this template, the virtual machine is configured as a Data Science Virtual Machine running Ubuntu 18.04.
+* [Microsoft.Compute/virtualMachines](/azure/templates/microsoft.compute/virtualmachines): Create a cloud-based virtual machine. In this template, the virtual machine is configured as a Data Science Virtual Machine running Ubuntu.
## Deploy the template
machine-learning Dsvm Ubuntu Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md
# Quickstart: Set up the Data Science Virtual Machine for Linux (Ubuntu)
-Get up and running with the Ubuntu 18.04 and Ubuntu 20.04 Data Science Virtual Machines.
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Get up and running with the Ubuntu 20.04 Data Science Virtual Machine and Azure DSVM for PyTorch (preview).
## Prerequisites
-To create an Ubuntu 18.04 or Ubuntu 20.04 Data Science Virtual Machine, you must have an Azure subscription. [Try Azure for free](https://azure.com/free).
+To create an Ubuntu 20.04 Data Science Virtual Machine or an Azure DSVM for PyTorch, you must have an Azure subscription. [Try Azure for free](https://azure.com/free).
>[!NOTE] >Azure free accounts don't support GPU enabled virtual machine SKUs. ## Create your Data Science Virtual Machine for Linux
-Here are the steps to create an instance of the Data Science Virtual Machine from Ubuntu 18.04 or Ubuntu 20.04:
+Here are the steps to create an instance of the Ubuntu 20.04 Data Science Virtual Machine or the Azure DSVM for PyTorch:
1. Go to the [Azure portal](https://portal.azure.com). You might be prompted to sign in to your Azure account if you're not already signed in.
-1. Find the virtual machine listing by typing in "data science virtual machine" and selecting "Data Science Virtual Machine- Ubuntu 18.04" or "Data Science Virtual Machine- Ubuntu 20.04"
+1. Find the virtual machine listing by typing in "data science virtual machine" and selecting "Data Science Virtual Machine- Ubuntu 20.04" or "Azure DSVM for PyTorch (preview)"
1. On the next window, select **Create**.
You can access the Ubuntu DSVM in one of three ways:
### SSH
-If you configured your VM with SSH authentication, you can logon using the account credentials that you created in the **Basics** section of step 3 for the text shell interface. On Windows, you can download an SSH client tool like [PuTTY](https://www.putty.org). If you prefer a graphical desktop (X Window System), you can use X11 forwarding on PuTTY.
+If you configured your VM with SSH authentication, you can log on using the account credentials that you created in the **Basics** section of step 3 for the text shell interface. On Windows, you can download an SSH client tool like [PuTTY](https://www.putty.org). If you prefer a graphical desktop (X Window System), you can use X11 forwarding on PuTTY.
> [!NOTE] > The X2Go client performed better than X11 forwarding in testing. We recommend using the X2Go client for a graphical desktop interface.
machine-learning Linux Dsvm Walkthrough https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/linux-dsvm-walkthrough.md
Before you can use a Linux DSVM, you must have the following prerequisites:
The [spambase](https://archive.ics.uci.edu/ml/datasets/spambase) dataset is a relatively small set of data that contains 4,601 examples. The dataset is a convenient size for demonstrating some of the key features of the DSVM because it keeps the resource requirements modest. > [!NOTE]
-> This walkthrough was created by using a D2 v2-size Linux DSVM (Ubuntu 18.04 Edition). You can use a DSVM this size to complete the procedures that are demonstrated in this walkthrough.
+> This walkthrough was created by using a D2 v2-size Linux DSVM. You can use a DSVM this size to complete the procedures that are demonstrated in this walkthrough.
If you need more storage space, you can create additional disks and attach them to your DSVM. The disks use persistent Azure storage, so their data is preserved even if the server is reprovisioned due to resizing or is shut down. To add a disk and attach it to your DSVM, complete the steps in [Add a disk to a Linux VM](../../virtual-machines/linux/add-disk.md?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json). The steps for adding a disk use the Azure CLI, which is already installed on the DSVM. You can complete the steps entirely from the DSVM itself. Another option to increase storage is to use [Azure Files](../../storage/files/storage-how-to-use-files-linux.md).
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/overview.md
The Data Science Virtual Machine (DSVM) is a customized VM image on the Azure cl
The DSVM is available on: + Windows Server 2019
-+ Ubuntu 18.04 LTS
+ Ubuntu 20.04 LTS Additionally, we are excited to offer Azure DSVM for PyTorch (preview), which is an Ubuntu 20.04 image from Azure Marketplace that is optimized for large, distributed deep learning workloads. It comes preinstalled and validated with the latest PyTorch version to reduce setup costs and accelerate time to value. It also comes packaged with various optimization functionalities (ONNX Runtime, DeepSpeed, MSCCL, ORTMoE, Fairscale, Nvidia Apex), as well as an up-to-date stack with the latest compatible versions of Ubuntu, Python, PyTorch, and CUDA.
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
The Data Science Virtual Machine is an easy way to explore data and do machine l
The Data Science Virtual Machine comes with the most useful data-science tools pre-installed.
+> [!IMPORTANT]
+> Items marked (preview) in this article are currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+ ## Build deep learning and machine learning solutions
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Azure DSVM for PyTorch (preview) | Usage notes |
|--|:-:|:-:|:-:|:-:| | [CUDA, cuDNN, NVIDIA Driver](https://developer.nvidia.com/cuda-toolkit) | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | <span class='green-check'>&#9989;</span></br> | [CUDA, cuDNN, NVIDIA Driver on the DSVM](./dsvm-tools-deep-learning-frameworks.md#cuda-cudnn-nvidia-driver) | | [Horovod](https://github.com/horovod/horovod) | <span class='red-x'>&#10060;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span></br> | [Horovod on the DSVM](./dsvm-tools-deep-learning-frameworks.md#horovod) |
The Data Science Virtual Machine comes with the most useful data-science tools p
## Store, retrieve, and manipulate data
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Azure DSVM for PyTorch (preview) | Usage notes |
|--|-:|:-:|:-:|:-:| | Relational databases | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) <br/> Developer Edition | [SQL Server on the DSVM](./dsvm-tools-data-platforms.md#sql-server-developer-edition) | | Database tools | SQL Server Management Studio<br/> SQL Server Integration Services<br/> [bcp, sqlcmd](/sql/tools/command-prompt-utility-reference-database-engine) | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | [SQuirreL SQL](http://squirrel-sql.sourceforge.net/) (querying tool), <br /> bcp, sqlcmd <br /> ODBC/JDBC drivers | |
The Data Science Virtual Machine comes with the most useful data-science tools p
## Program in Python, R, Julia, and Node.js
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Azure DSVM for PyTorch (preview) | Usage notes |
|--|:-:|:-:|:-:|:-:| | [CRAN-R](https://cran.r-project.org/) with popular packages pre-installed | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | <span class='green-check'>&#9989;</span> | | | [Anaconda Python](https://www.continuum.io/) with popular packages pre-installed | <span class='green-check'>&#9989;</span><br/> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | <span class='green-check'>&#9989;</span></br> (Miniconda) | |
The Data Science Virtual Machine comes with the most useful data-science tools p
| &nbsp;&nbsp;&nbsp;&nbsp; Julia | | | | [Julia Jupyter Samples](./dsvm-samples-and-walkthroughs.md#julia-language) | | &nbsp;&nbsp;&nbsp;&nbsp; PySpark | | | | [pySpark Jupyter Samples](./dsvm-samples-and-walkthroughs.md#sparkml) |
-**Ubuntu 18.04 DSVM, 20.04 DSVM and Windows Server 2019 DSVM** has the following Jupyter Kernels:-</br>
+**Ubuntu 20.04 DSVM, Azure DSVM for PyTorch (preview) and Windows Server 2019 DSVM** have the following Jupyter Kernels:-</br>
* Python3.8-default</br> * Python3.8-Tensorflow-Pytorch</br> * Python3.8-AzureML</br>
The Data Science Virtual Machine comes with the most useful data-science tools p
* Scala Spark – HDInsight</br> * Python 3 Spark – HDInsight</br>
-**Ubuntu 18.04 DSVM, 20.04 DSVM and Windows Server 2019 DSVM** has the following conda environments:-</br>
+**Ubuntu 20.04 DSVM, Azure DSVM for PyTorch (preview) and Windows Server 2019 DSVM** have the following conda environments:-</br>
* Python3.8-default</br> * Python3.8-Tensorflow-Pytorch</br> * Python3.8-AzureML</br> ## Use your preferred editor or IDE
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Azure DSVM for PyTorch (preview) | Usage notes |
|--|:-:|:-:|:-:|:-:| | [Notepad++](https://notepad-plus-plus.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | | | [Nano](https://www.nano-editor.org/) | <span class='green-check'>&#9989;</span></br> | <span class='red-x'>&#10060;</span></br> | <span class='red-x'>&#10060;</span></br> | |
The Data Science Virtual Machine comes with the most useful data-science tools p
## Organize & present results
-| Tool | Windows Server 2019 DSVM | Ubuntu 18.04 DSVM | Ubuntu 20.04 DSVM | Usage notes |
+| Tool | Windows Server 2019 DSVM | Ubuntu 20.04 DSVM | Azure DSVM for PyTorch (preview) | Usage notes |
|--|:-:|:-:|:-:|:-:| | [Microsoft 365](https://www.microsoft.com/microsoft-365) (Word, Excel, PowerPoint) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | | | [Microsoft Teams](https://www.microsoft.com/microsoft-teams) | <span class='green-check'>&#9989;</span> | <span class='red-x'>&#10060;</span> | <span class='red-x'>&#10060;</span> | |
machine-learning Ubuntu Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/ubuntu-upgrade.md
Title: How to upgrade your Data Science Virtual Machine to Ubuntu 18.04
+ Title: How to upgrade your Data Science Virtual Machine to Ubuntu 20.04
-description: Learn how to upgrade from CentOS and Ubuntu 16.04 to the latest Ubuntu 18.04 Data Science Virtual Machine.
+description: Learn how to upgrade from CentOS and Ubuntu 18.04 to the latest Ubuntu 20.04 Data Science Virtual Machine.
keywords: deep learning, AI, data science tools, data science virtual machine, team data science process --++ Previously updated : 10/07/2020 Last updated : 10/04/2022
-# Upgrade your Data Science Virtual Machine to Ubuntu 18.04
+# Upgrade your Data Science Virtual Machine to Ubuntu 20.04
-If you have a Data Science Virtual Machine running an older release such as Ubuntu 16.04 or CentOS, you should migrate your DSVM to Ubuntu 18.04. Migrating will ensure that you get the latest operating system patches, drivers, preinstalled software, and library versions. This document tells you how to migrate from either older versions of Ubuntu or from CentOS.
+If you have a Data Science Virtual Machine running an older release such as Ubuntu 18.04 or CentOS, you should migrate your DSVM to Ubuntu 20.04. Migrating will ensure that you get the latest operating system patches, drivers, preinstalled software, and library versions. This document tells you how to migrate from either older versions of Ubuntu or from CentOS.
## Prerequisites
If you have a Data Science Virtual Machine running an older release such as Ubun
There are two possible ways to migrate: -- In-place migration, also called "same server" migration. This migration upgrades the existing VM without creating a new virtual machine. In-place migration is the easier way to migrate from Ubuntu 16.04 to Ubuntu 18.04.-- Side-by-side migration, also called "inter-server" migration. This migration transfers data from the existing virtual machine to a newly created VM. Side-by-side migration is the way to migrate from Centos to Ubuntu 18.04. You may prefer side-by-side migration for upgrading between Ubuntu versions if you feel your old install has become needlessly cluttered.
+- In-place migration, also called "same server" migration. This migration upgrades the existing VM without creating a new virtual machine. In-place migration is the easier way to migrate from Ubuntu 18.04 to Ubuntu 20.04.
+- Side-by-side migration, also called "inter-server" migration. This migration transfers data from the existing virtual machine to a newly created VM. Side-by-side migration is the way to migrate from Centos to Ubuntu 20.04. You may prefer side-by-side migration for upgrading between Ubuntu versions if you feel your old install has become needlessly cluttered.
## Snapshot your VM in case you need to roll back
Whether you did an in-place or side-by-side migration, confirm that you've succe
cat /etc/os-release ```
-And you should see that you're running Ubuntu 18.04.
+And you should see that you're running Ubuntu 20.04.
:::image type="content" source="media/ubuntu_upgrade/ssh-os-release.png" alt-text="Screenshot of Ubuntu terminal showing OS version data":::
The change of version is also shown in the Azure portal.
## Next steps - [Data science with an Ubuntu Data Science Machine in Azure](./linux-dsvm-walkthrough.md)-- [What tools are included on the Azure Data Science Virtual Machine?](./tools-included.md)
+- [What tools are included on the Azure Data Science Virtual Machine?](./tools-included.md)
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
https://github.com/Azure/azureml-examples/blob/v2samplesreorg/sdk/python/jobs/au
+## Known Issues
+
+Dealing with very low scores, or higher loss values:
+
+For certain datasets, regardless of the NLP task, the scores produced may be very low, sometimes even zero. These low scores are accompanied by higher loss values, implying that the neural network failed to converge. This can happen more frequently on certain GPU SKUs.
+
+While such cases are uncommon, they're possible, and the best way to handle them is to use hyperparameter tuning and provide a wider range of values, especially for hyperparameters like learning rates. Until our hyperparameter tuning capability is available in production, we recommend that users who face such issues use the NC6 or ND6 compute clusters, where we've found training outcomes to be fairly stable.
+ ## Next steps + [Deploy AutoML models to an online (real-time inference) endpoint](how-to-deploy-automl-endpoint.md)
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
You can use Azure Machine Learning compute cluster to distribute a training or b
In this article, learn how to: * Create a compute cluster
-* Lower your compute cluster cost
+* Lower your compute cluster cost with low priority VMs
* Set up a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) for the cluster ## Prerequisites
SSH access is disabled by default. SSH access can't be changed after creation.
- ## Lower your compute cluster cost
+ ## Lower your compute cluster cost with low priority VMs
You may also choose to use [low-priority VMs](how-to-manage-optimize-cost.md#low-pri-vm) to run some or all of your workloads. These VMs don't have guaranteed availability and may be preempted while in use. You'll have to restart a preempted job.
+Using Azure Low Priority Virtual Machines allows you to take advantage of Azure's unused capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict Azure Low Priority Virtual Machines, so they're best suited to workloads that can handle interruptions. The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Low Priority Virtual Machines, Azure will allocate the VMs if there's capacity available, but there's no SLA and no high availability guarantee for these VMs.
+ Use any of these ways to specify a low-priority VM: # [Python SDK](#tab/python)
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
ms.devlang: azurecli
Learn how to use [NVIDIA Triton Inference Server](https://aka.ms/nvidia-triton-docs) in Azure Machine Learning with [online endpoints](concept-endpoints.md#what-are-online-endpoints).
-Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads. No-code deployment for Triton models are supported in both [managed online endpoints and Kubernetes online endpoints](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
+Triton is multi-framework, open-source software that is optimized for inference. It supports popular machine learning frameworks like TensorFlow, ONNX Runtime, PyTorch, NVIDIA TensorRT, and more. It can be used for your CPU or GPU workloads. No-code deployment for Triton models is supported in both [managed online endpoints and Kubernetes online endpoints](concept-endpoints.md#managed-online-endpoints-vs-kubernetes-online-endpoints).
In this article, you will learn how to deploy Triton and a model to a [managed online endpoint](concept-endpoints.md#managed-online-endpoints). Information is provided on using the CLI (command line), Python SDK v2, and Azure Machine Learning studio.
The information in this document is based on using a model stored in ONNX format
> [!IMPORTANT] > You may need to request a quota increase for your subscription before you can use this series of VMs. For more information, see [NCv3-series](../virtual-machines/ncv3-series.md).
-The information in this article is based on the [Deploy a model to online endpoints using Triton](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/triton/single-model/online-endpoints-triton.ipynb) notebook contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo and then change directories to the `sdk/endpoints/online/triton/single-model/online-endpoints-triton.ipynb` directory in the repo:
+The information in this article is based on the [online-endpoints-triton.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/triton/single-model/online-endpoints-triton.ipynb) notebook contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste files, clone the repo, and then change directories to the `sdk/endpoints/online/triton/single-model/` directory in the repo:
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples
-cd sdk/endpoints/online/triton/single-model/online-endpoints-triton.ipynb
+cd azureml-examples/sdk/python/endpoints/online/triton/single-model/
``` # [Studio](#tab/azure-studio) * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* An Azure Machine Learning workspace. If you don't have one, use the steps in [Manage Azure Machine Learning workspaces in the portal or with the Python SDK](how-to-manage-workspace.md) to create one.
+* An Azure Machine Learning workspace. If you don't have one, use the steps in [Manage Azure Machine Learning workspaces in the portal, or with the Python SDK](how-to-manage-workspace.md) to create one.
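Whichever interface you choose, the overall flow is the same. As a rough sketch only, the Python SDK v2 path might look like the following; the endpoint and deployment names are assumptions, and the model path follows the sample repo's layout:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint, Model
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

# Create the endpoint (name assumed for illustration).
endpoint = ManagedOnlineEndpoint(name="triton-demo-endpt", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# A model of type "triton_model" enables no-code deployment: no scoring script
# or environment is supplied; Triton serves the model repository directly.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=endpoint.name,
    model=Model(path="./models", type="triton_model"),
    instance_type="Standard_NC6s_v3",  # GPU SKU from the NCv3 quota note above
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```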
Once your deployment completes, use the following command to make a scoring requ
# [Studio](#tab/azure-studio)
-Azure Machine Learning Studio provides the ability to test endpoints with JSON. However, serialized JSON is not currently included for this example.
+Azure Machine Learning studio provides the ability to test endpoints with JSON. However, serialized JSON is not currently included for this example.
-To test an endpoint using Azure Machine Learning Studio, click `Test` from the Endpoint page.
+To test an endpoint using Azure Machine Learning studio, click `Test` from the Endpoint page.
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
In this article, learn how to run your [TensorFlow](https://www.tensorflow.org/overview) training scripts at scale using Azure Machine Learning Python SDK v2.
-This example code in this article train a TensorFlow model to classify handwritten digits using a deep neural network (DNN), register the model, and deploy it to an online endpoint.
+The example code in this article trains a TensorFlow model to classify handwritten digits using a deep neural network (DNN), registers the model, and deploys it to an online endpoint.
Whether you're developing a TensorFlow model from the ground-up or you're bringing an existing model into the cloud, you can use Azure Machine Learning to scale out open-source training jobs using elastic cloud compute resources. You can build, deploy, version, and monitor production-grade models with Azure Machine Learning.
Whether you're developing a TensorFlow model from the ground-up or you're bringi
To benefit from this article, you'll need to: - Access an Azure subscription. If you don't have one already, [create a free account](https://azure.microsoft.com/free/).-- Run the code in this article using either an Azure Machine Learning compute instance, or your own Jupyter notebook.
+- Run the code in this article using either an Azure Machine Learning compute instance or your own Jupyter notebook.
- Azure Machine Learning compute instance - no downloads or installation necessary - Complete the [Quickstart: Get started with Azure Machine Learning](quickstart-create-resources.md) to create a dedicated notebook server pre-loaded with the SDK and the sample repository. - In the samples deep learning folder on the notebook server, find a completed and expanded notebook by navigating to this directory: **v2 > sdk > jobs > single-step > tensorflow > train-hyperparameter-tune-deploy-with-tensorflow**.
If `DefaultAzureCredential` doesn't work for you, see [`azure-identity reference
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=credential)]
-If you prefer to use a browser to sign in and authenticate, you should remove the comments in the following code and use it instead.
+If you prefer to use a browser to sign in and authenticate, you should uncomment the following code and use it instead.
```python # Handle to the workspace
If you prefer to use a browser to sign in and authenticate, you should remove th
Next, get a handle to the workspace by providing your Subscription ID, Resource Group name, and workspace name. To find these parameters:
-1. Look in the upper-right corner of the Azure Machine Learning studio toolbar for your workspace name.
+1. Look for your workspace name in the upper-right corner of the Azure Machine Learning studio toolbar.
2. Select your workspace name to show your Resource Group and Subscription ID. 3. Copy the values for Resource Group and Subscription ID into the code.
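A minimal sketch of that workspace handle, using placeholder values, with the browser-based credential left commented out:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential, InteractiveBrowserCredential

credential = DefaultAzureCredential()
# credential = InteractiveBrowserCredential()  # uncomment to sign in through a browser instead

# Handle to the workspace; the three values below come from the steps above.
ml_client = MLClient(
    credential=credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)
```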
The result of running this script is a workspace handle that you'll use to manag
AzureML needs a compute resource to run a job. This resource can be single or multi-node machines with Linux or Windows OS, or a specific compute fabric like Spark.
-In the following example script, we provision a Linux [`compute cluster`](/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python). You can see the [`Azure Machine Learning pricing`](https://azure.microsoft.com/pricing/details/machine-learning/) page for the full list of VM sizes and prices. Since we need a GPU cluster for this example, let's pick a *STANDARD_NC6* model and create an Azure ML compute.
+In the following example script, we provision a Linux [`compute cluster`](/azure/machine-learning/how-to-create-attach-compute-cluster?tabs=python). You can see the [`Azure Machine Learning pricing`](https://azure.microsoft.com/pricing/details/machine-learning/) page for the full list of VM sizes and prices. Since we need a GPU cluster for this example, let's pick a *STANDARD_NC6* model and create an AzureML compute.
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=cpu_compute_target)]
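A condensed sketch of that provisioning step, assuming the `ml_client` handle from the previous section; the scale settings are illustrative:

```python
from azure.ai.ml.entities import AmlCompute

gpu_compute_target = "gpu-cluster"  # name referenced later when submitting the job

# STANDARD_NC6 provides one GPU per node; the cluster scales down to zero when idle.
gpu_cluster = AmlCompute(
    name=gpu_compute_target,
    type="amlcompute",
    size="STANDARD_NC6",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=180,  # seconds
)
ml_client.begin_create_or_update(gpu_cluster).result()  # ml_client from the previous step
```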
During the pipeline run, you'll use MLFlow to log the parameters and metrics. To
In the training script `tf_mnist.py`, we create a simple deep neural network (DNN). This DNN has: -- An input layer with 28 * 28 = 784 neurons. Each neuron represents an image pixel;-- Two hidden layers. The first hidden layer has 300 neurons and the second hidden layer has 100 neurons; and
+- An input layer with 28 * 28 = 784 neurons. Each neuron represents an image pixel.
+- Two hidden layers. The first hidden layer has 300 neurons and the second hidden layer has 100 neurons.
- An output layer with 10 neurons. Each neuron represents a targeted label from 0 to 9. :::image type="content" source="media/how-to-train-tensorflow/neural-network.png" alt-text="Diagram showing a deep neural network with 784 neurons at the input layer, two hidden layers, and 10 neurons at the output layer.":::
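As an illustration only (the actual `tf_mnist.py` in the sample may differ), a Keras model matching that description could look like this:

```python
import tensorflow as tf

# 784 inputs -> 300 hidden -> 100 hidden -> 10 output logits, as described above.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28 * 28 = 784 input neurons
    tf.keras.layers.Dense(300, activation="relu"),   # first hidden layer
    tf.keras.layers.Dense(100, activation="relu"),   # second hidden layer
    tf.keras.layers.Dense(10),                       # one logit per digit, 0-9
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```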
You'll use the general purpose `command` to run the training script and perform
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=job)] -- The inputs for this command include the data location, batch size, number of neurons in the first and second layer, and learning rate.
- - We've passed in the web path directly as an input.
+- The inputs for this command include the data location, batch size, number of neurons in the first and second layer, and learning rate. Notice that we've passed in the web path directly as an input.
- For the parameter values: - provide the compute cluster `gpu_compute_target = "gpu-cluster"` that you created for running this command;
You'll use the general purpose `command` to run the training script and perform
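For context, here is a hedged sketch of that `command` job; the curated environment name, input names, and the public MNIST path are assumptions patterned after the sample, not the sample itself:

```python
from azure.ai.ml import Input, command

job = command(
    code="./src",  # folder containing tf_mnist.py
    command=(
        "python tf_mnist.py --data-folder ${{inputs.data_folder}} "
        "--batch-size ${{inputs.batch_size}} "
        "--first-layer-neurons ${{inputs.first_layer_neurons}} "
        "--second-layer-neurons ${{inputs.second_layer_neurons}} "
        "--learning-rate ${{inputs.learning_rate}}"
    ),
    inputs={
        # The web path is passed directly as an input (placeholder URL).
        "data_folder": Input(type="uri_folder", path="https://<PUBLIC_MNIST_DATA_URL>/"),
        "batch_size": 64,
        "first_layer_neurons": 300,
        "second_layer_neurons": 100,
        "learning_rate": 0.001,
    },
    compute="gpu-cluster",  # the cluster created earlier
    environment="AzureML-tensorflow-2.7-ubuntu20.04-py38-cuda11-gpu@latest",  # assumed curated environment
    display_name="tf-dnn-image-classify",
)
```

Submitting it is then a single call, `ml_client.jobs.create_or_update(job)`, as described in the next section.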
### Submit the job
-It's now time to submit the job to run in AzureML. This time you'll use `create_or_update` on `ml_client.jobs`.
+It's now time to submit the job to run in AzureML. This time, you'll use `create_or_update` on `ml_client.jobs`.
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=create_job)]
As the job is executed, it goes through the following stages:
- **Preparing**: A docker image is created according to the environment defined. The image is uploaded to the workspace's container registry and cached for later runs. Logs are also streamed to the run history and can be viewed to monitor progress. If a curated environment is specified, the cached image backing that curated environment will be used. -- **Scaling**: The cluster attempts to scale up if the cluster requires more nodes to execute the run than are currently available.
+- **Scaling**: The cluster attempts to scale up if it requires more nodes to execute the run than are currently available.
- **Running**: All scripts in the script folder *src* are uploaded to the compute target, data stores are mounted or copied, and the script is executed. Outputs from *stdout* and the *./logs* folder are streamed to the run history and can be used to monitor the run.
For more information about deployment, see [Deploy and score a machine learning
### Create a new online endpoint
-As a first step, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you'll create a unique name using a universally unique identifier (UUID).
+As a first step to deploying your model, you need to create your online endpoint. The endpoint name must be unique in the entire Azure region. For this article, you'll create a unique name using a universally unique identifier (UUID).
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=online_endpoint_name)]
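A short sketch of that step, reusing the `ml_client` handle from earlier; the name prefix is arbitrary:

```python
import uuid

from azure.ai.ml.entities import ManagedOnlineEndpoint

# A UUID fragment keeps the name unique within the Azure region.
online_endpoint_name = "tff-endpoint-" + str(uuid.uuid4())[:8]

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Online endpoint for the TensorFlow MNIST classifier",
    auth_mode="key",
)
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```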
Once you've created the endpoint, you can retrieve it as follows:
### Deploy the model to the endpoint
-After you've created the endpoint, you can deploy the model with the entry script. An endpoint can have multiple deployments. The endpoint can then direct traffic to these deployments, using rules.
+After you've created the endpoint, you can deploy the model with the entry script. An endpoint can have multiple deployments. Using rules, the endpoint can then direct traffic to these deployments.
In the following code, you'll create a single deployment that handles 100% of the incoming traffic. We've specified an arbitrary color name (*tff-blue*) for the deployment. You could also use any other name such as *tff-green* or *tff-red* for the deployment. The code to deploy the model to the endpoint does the following: -- Deploys the best version of the model that you registered earlier;-- Scores the model, using the `core.py` file; and-- Uses the same curated environment (that you declared earlier) to perform inferencing.
+- deploys the best version of the model that you registered earlier;
+- scores the model, using the `core.py` file; and
+- uses the same curated environment (that you declared earlier) to perform inferencing.
[!notebook-python[](~/azureml-examples-v2samplesreorg/sdk/python/jobs/single-step/tensorflow/train-hyperparameter-tune-deploy-with-tensorflow/train-hyperparameter-tune-deploy-with-tensorflow.ipynb?name=blue_deployment)]
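A compressed sketch of that deployment; the model and environment references are placeholders for the assets registered and declared earlier in the notebook:

```python
from azure.ai.ml.entities import CodeConfiguration, ManagedOnlineDeployment

blue_deployment = ManagedOnlineDeployment(
    name="tff-blue",                                  # arbitrary color name from the text
    endpoint_name=online_endpoint_name,               # endpoint created above
    model="azureml:<REGISTERED_MODEL_NAME>:<VERSION>",  # placeholder: best model registered earlier
    environment="<CURATED_ENVIRONMENT_NAME>@latest",    # placeholder: same curated environment as training
    code_configuration=CodeConfiguration(code="./src", scoring_script="core.py"),  # scoring file named in the article
    instance_type="Standard_DS3_v2",                  # assumed SKU
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(blue_deployment).result()

# Route 100% of incoming traffic to this deployment.
endpoint.traffic = {"tff-blue": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```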
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
Title: Upgrade steps for Container Instances web services to managed online endpoints
+ Title: Upgrade steps for Azure Container Instances web services to managed online endpoints
description: Upgrade steps for Azure Container Instances web services to managed online endpoints in Azure Machine Learning
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
For situations where the SDK doesn't provide access to datastores, you might be
## Move data to supported Azure storage solutions
-Azure Machine Learning supports accessing data from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL. If you're using unsupported storage, we recommend that you move your data to supported Azure storage solutions by using [Azure Data Factory and these steps](../../data-factory/quickstart-create-data-factory-copy-data-tool.md). Moving data to supported storage can help you save data egress costs during machine learning experiments.
+Azure Machine Learning supports accessing data from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL. If you're using unsupported storage, we recommend that you move your data to supported Azure storage solutions by using [Azure Data Factory and these steps](../../data-factory/quickstart-hello-world-copy-data-tool.md). Moving data to supported storage can help you save data egress costs during machine learning experiments.
Azure Data Factory provides efficient and resilient data transfer with more than 80 prebuilt connectors at no extra cost. These connectors include Azure data services, on-premises data sources, Amazon S3 and Redshift, and Google BigQuery.
marketplace Azure Container Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-offer-listing.md
Provide a **Short description** of your offer, up to 256 characters. This will a
[!INCLUDE [Long description-2](includes/long-description-2.md)]
+> [!IMPORTANT]
+> If you plan to publish a Kubernetes application-based offer, ensure your offer is categorized correctly and discoverable by customers by adding the term `KubernetesApps` to your description.
+ Use HTML tags to format your description so it's more engaging. For a list of allowed tags, see [Supported HTML tags](supported-html-tags.md). Enter the web address (URL) of your organization's privacy policy. Ensure your offer complies with privacy laws and regulations. You must also post a valid privacy policy on your website.
marketplace Azure Container Plan Technical Configuration Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-plan-technical-configuration-kubernetes.md
For more, see [Prepare Azure Kubernetes technical assets](azure-container-techni
## Setting cluster extension type name
-Cluster extensions enable an Azure Resource Manager driven experience for your application. The following limitations apply when setting the cluster extension type name value:
+[Cluster extensions][cluster-extensions] enable an Azure Resource Manager driven experience for your application. The following limitations apply when setting the cluster extension type name value:
- You must provide the cluster extension type name in the format of 'PublisherName.ApplicationName'.
Select *Add CNAB Bundle* to select the payload reference like so:
## Next steps - To **Co-sell with Microsoft** (optional), select it in the left-nav menu. For details, see [Co-sell partner engagement](/partner-center/co-sell-overview?context=/azure/marketplace/context/context).-- If you're not setting up either of these or you've finished, it's time to [Review and publish your offer](review-publish-offer.md).
+- If you're not setting up either of these or you've finished, it's time to [Review and publish your offer](review-publish-offer.md).
+
+<!-- LINKS -->
+[cluster-extensions]: ../aks/cluster-extensions.md
marketplace Azure Container Technical Assets Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-technical-assets-kubernetes.md
In addition to your solution domain, your engineering team should have knowledge
- Working knowledge of [JSON](https://www.json.org/) - Working knowledge of [Helm](https://www.helm.sh) - Working knowledge of [createUiDefinition][createuidefinition]
+- Working knowledge of [Azure Resource Manager (ARM) templates][arm-template-overview]
## Prerequisites
As part of the publishing process, Microsoft will deep copy your CNAB from your
Microsoft has created a first-party application responsible for handling this process with an `id` of `32597670-3e15-4def-8851-614ff48c1efa`. To begin, create a service principal based on the application: + # [Linux](#tab/linux)
+> [!NOTE]
+> If your account doesn't have permission to create a service principal, `az ad sp create` will return an error message containing "Insufficient privileges to complete the operation". Contact your Azure Active Directory admin to create a service principal.
++ ```azurecli-interactive az login az ad sp create --id 32597670-3e15-4def-8851-614ff48c1efa
Your output should look similar to the following:
... ```
-Finally, create a role assignment to grant the service principal the ability to pull from your registry using the values you obtained earlier:
+Next, create a role assignment to grant the service principal the ability to pull from your registry using the values you obtained earlier:
+ ```azurecli-interactive az role assignment create --assignee <sp-id> --scope <registry-id> --role acrpull
az provider show -n Microsoft.PartnerCenterIngestion --subscription <subscriptio
# [Windows](#tab/windows)
+> [!NOTE]
+> If your account doesn't have permission to create a service principal, `New-AzADServicePrincipal` will return an error message containing "Insufficient privileges to complete the operation". Contact your Azure Active Directory admin to create a service principal.
+ ```powershell-interactive Connect-AzAccount New-AzADServicePrincipal -ApplicationId 32597670-3e15-4def-8851-614ff48c1efa
Your output should look similar to the following:
Next, create a role assignment to grant the service principal the ability to pull from your registry: + ```powershell-interactive New-AzRoleAssignment -ObjectId <sp-id> -Role acrpull -Scope <registry-id> ```
Ensure the Helm chart adheres to the following rules:
- All image names and references are parameterized and represented in `values.yaml` as global.azure.images references. Update `deployment.yaml` to point to these images. This ensures the image block can be updated and referenced by Azure Marketplace's ACR.
- :::image type="content" source="./media/azure-container/billing-identifier.png" alt-text="A screenshot of a properly formatted values.yaml file is shown. It resembles the sample values.yaml file linked from this article.":::
+ :::image type="content" source="./media/azure-container/image-references.png" alt-text="A screenshot of a properly formatted deployment.yaml file is shown. The parameterized image references are shown, resembling the content in the sample deployment.yaml file linked in this article.":::
- If you have any subcharts, extract the content under charts and update each of your dependent image references to point to the images included in the main chart's `values.yaml`. - Images must use digests instead of tags. This ensures CNAB building is deterministic.
+
+ :::image type="content" source="./media/azure-container/billing-identifier.png" alt-text="A screenshot of a properly formatted values.yaml file is shown. The images are using digests. The content resembles the sample values.yaml file linked in this article.":::
### Make updates based on your billing model
After reviewing the billing models available, select one appropriate for your us
- Add a billing identifier label and cpu cores request to your `deployment.yaml` file.
+ :::image type="content" source="./media/azure-container/billing-identifier-label.png" alt-text="A screenshot of a properly formatted billing identifier label in a deployment.yaml file. The content resembles the sample depoyment.yaml file linked in this article":::
+
+ :::image type="content" source="./media/azure-container/resources.png" alt-text="A screenshot of CPU resource requests in a deployment.yaml file. The content resembles the sample depoyment.yaml file linked in this article.":::
+ - Add a billing identifier value for `global.azure.billingidentifier` in `values.yaml`.
+ :::image type="content" source="./media/azure-container/billing-identifier-value.png" alt-text="A screenshot of a properly formatted values.yaml file, showing the global > azure > billingIdentifier field.":::
+ Note that at deployment time, the cluster extensions feature will replace the billing identifier value with the extension type name you provide while setting up plan details. For examples configured to deploy the [Azure Voting App][azure-voting-app], see the following:
For an example of how to integrate `container-package-app` into an Azure Pipelin
[cnab]: https://cnab.io/ [cluster-extensions]: ../aks/cluster-extensions.md
-[azure-voting-app]: https://github.com/Azure-Samples/azure-voting-app-redis
+[azure-voting-app]: https://github.com/Azure-Samples/kubernetes-offer-samples/tree/main/samples/k8s-offer-azure-vote/azure-vote
[createuidefinition]: ../azure-resource-manager/managed-applications/create-uidefinition-overview.md [sandbox-environment]: https://ms.portal.azure.com/#view/Microsoft_Azure_CreateUIDef/SandboxBlade [arm-template-overview]: ../azure-resource-manager/templates/overview.md
marketplace Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-account.md
To create an account in the commercial marketplace program in Partner Center, ma
### Create a Partner Center account and enroll in the commercial marketplace
-Use this method if you're new to Partner Center and are not enrolled in the Microsoft Partner Network. Complete the steps in this section to create a new Partner Center account and publisher profile.
+Use this method if you're new to Partner Center and are not enrolled in the Microsoft Cloud Partner Program. Complete the steps in this section to create a new Partner Center account and publisher profile.
#### Register on the Partner Center enrollment page
Sign in with a work account so that you can link your company's work email accou
#### Agree to the terms and conditions
-As part of the commercial marketplace registration process, you need to agree to the terms and conditions in the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). If you're new to Microsoft Partner Network, you also need to agree to the terms and conditions in the Microsoft Cloud Partner Program Agreement.
+As part of the commercial marketplace registration process, you need to agree to the terms and conditions in the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). If you're new to the Microsoft Cloud Partner Program, you also need to agree to the terms and conditions in the Microsoft Cloud Partner Program Agreement.
You've now created a commercial marketplace account in Partner Center. Continue to [Add new publishers to the commercial marketplace](add-publishers.md).
You've now created a commercial marketplace account in Partner Center. Continue
Follow the instructions in this section to create a commercial marketplace account if you already have an enrollment in Microsoft Partner Center. There are two types of existing enrollments that you can use to set up your commercial marketplace account. Choose the scenario that applies to you:
-*What if I'm already enrolled in the Microsoft Partner Network?*
+*What if I'm already enrolled in the Microsoft Cloud Partner Program?*
- [Use an existing Microsoft Cloud Partner Program account](#use-an-existing-microsoft-cloud-partner-program-account) to create your account. *What if I'm already enrolled in a developer program?*
You can then assign the appropriate user roles and permissions to your users, so
1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507) with your Microsoft Cloud Partner Program account. >[!NOTE]
- > You must have an **account admin** or a **global admin** role to sign in to Microsoft Partner Network.
+ > You must have an **account admin** or a **global admin** role to sign in to Microsoft Cloud Partner Program.
1. In the top-right, select **Settings** > **Account settings**. Then in the left menu, select **Programs**.
marketplace Isv App License Power Bi Visual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license-power-bi-visual.md
ISV app license management currently supports:
To manage your ISV app licenses, you need to meet the following prerequisites.
-1. Have a valid [Microsoft Partner Network account](/partner-center/mpn-create-a-partner-center-account).
+1. Have a valid [Microsoft Cloud Partner Program account](/partner-center/mpn-create-a-partner-center-account).
1. Be signed up for commercial marketplace program. For more information, see [Create a commercial marketplace account in Partner Center](create-account.md). 1. Your developer team has the development environments and tools required to create Power BI visuals solutions. See [Develop your own Power BI visual and Tutorial: Develop a Power BI circle card visual](/power-bi/developer/visuals/develop-power-bi-visuals).
marketplace Isv App License https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-app-license.md
ISV app license management currently supports:
To manage your ISV app licenses, you need to meet the following prerequisites.
-1. Have a valid [Microsoft Partner Network account](/partner-center/mpn-create-a-partner-center-account).
+1. Have a valid [Microsoft Cloud Partner Program account](/partner-center/mpn-create-a-partner-center-account).
1. Be signed up for commercial marketplace program. For more information, see [Create a commercial marketplace account in Partner Center](create-account.md). 1. Be signed up for the [ISV Connect program](https://partner.microsoft.com/solutions/business-applications/isv-overview). For more information, see [Microsoft Business Applications Independent Software Vendor (ISV) Connect Program onboarding guide](business-applications-isv-program.md). 1. Your developer team has the development environments and tools required to create Dataverse solutions. Your Dataverse solution must include model-driven applications (currently these are the only type of solution components that are supported through the license management feature).
marketplace Monetize Addins Through Microsoft Commercial Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/monetize-addins-through-microsoft-commercial-marketplace.md
Your offer must also use the SaaS fulfillment APIs to integrate with Commercial
### Sign up for Partner Center To begin submitting your SaaS offer, you must create an account in the Commercial Marketplace program in Partner Center. This account must be associated with a company.-- If you're new to Partner Center, and have never enrolled in the Microsoft Partner Network, see [Create an account using the Partner Center enrollment page](/azure/marketplace/partner-center-portal/create-account#create-an-account-using-the-partner-center-enrollment-page).
+- If you're new to Partner Center, and have never enrolled in the Microsoft Cloud Partner Program, see [Create an account using the Partner Center enrollment page](/azure/marketplace/partner-center-portal/create-account#create-an-account-using-the-partner-center-enrollment-page).
- If you're already enrolled in the Microsoft Cloud Partner Program or in a Partner Center developer program, see [Create an account using existing Microsoft Partner Center enrollments](/azure/marketplace/partner-center-portal/create-account#create-an-account-using-existing-microsoft-partner-center-enrollments) for information about how to create your account. ### Register a SaaS application
marketplace Open A Developer Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/open-a-developer-account.md
We'll verify this information during the account creation process.
You can create an account in one of two ways: - If you're new to Partner Center and don't have a Microsoft Network Account, [create an account using the Partner Center enrollment page](#create-an-account-using-the-partner-center-enrollment-page).-- If you're already enrolled in the Microsoft Partner Network, [create an account directly from Partner Center using existing Microsoft Partner Center enrollments](#create-an-account-using-an-existing-partner-center-enrollment).
+- If you're already enrolled in the Microsoft Cloud Partner Program, [create an account directly from Partner Center using existing Microsoft Partner Center enrollments](#create-an-account-using-an-existing-partner-center-enrollment).
### Create an account using the Partner Center enrollment page If you're new to Partner Center, follow the instructions in this section to create your account.
marketplace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/overview.md
Last updated 5/24/2022
# What is the Microsoft commercial marketplace?
-The Microsoft commercial marketplace is a catalog of solutions from our independent software vendor (ISV) partners. As an ISV member of the Microsoft Partner Network, you can create, publish, and manage your commercial marketplace offers in Partner Center. Your solutions are listed together with our Microsoft solutions, connecting you to businesses, organizations, and government agencies around the world.
+The Microsoft commercial marketplace is a catalog of solutions from our independent software vendor (ISV) partners. As an ISV member of the Microsoft Cloud Partner Program, you can create, publish, and manage your commercial marketplace offers in Partner Center. Your solutions are listed together with our Microsoft solutions, connecting you to businesses, organizations, and government agencies around the world.
The commercial marketplace is available in more than 100 countries and regions, and we manage tax payment in many of them. If you sell to established Microsoft customers, they have the added benefit of including commercial marketplace purchases in their existing Microsoft purchase agreements to receive a consolidated invoice from Microsoft.
marketplace Plan Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-consulting-service-offer.md
To learn more about the differences between AppSource and Azure Marketplace, see
To demonstrate to customers your expertise in a field, you must meet a set of eligibility requirements before publishing a consulting service offer. The requirements depend on the product at the core of your offer. The complete list of eligibility requirements for each primary product is in the [certification policies for consulting services](/legal/marketplace/certification-policies#800-consulting-services). > [!NOTE]
-> For some primary products, you must have a Gold or Silver Microsoft competency in your solution area. For more information, see [Microsoft Partner Network Competencies](https://partner.microsoft.com/membership/competencies).
+> For some primary products, you must have a Gold or Silver Microsoft competency in your solution area. For more information, see [Microsoft Cloud Partner Program Competencies](https://partner.microsoft.com/membership/competencies).
## Service type and duration
marketplace Plan Managed Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-managed-service-offer.md
Managed Services are Azure Marketplace offers that enable cross-tenant and multi
## Eligibility requirements
-To publish a Managed Service offer, you must have earned a Gold or Silver Microsoft Competency in Cloud Platform. This competency demonstrates your expertise to customers. For more information, see [Microsoft Partner Network Competencies](https://partner.microsoft.com/membership/competencies).
+To publish a Managed Service offer, you must have earned a Gold or Silver Microsoft Competency in Cloud Platform. This competency demonstrates your expertise to customers. For more information, see [Microsoft Cloud Partner Program Competencies](https://partner.microsoft.com/membership/competencies).
Offers must meet all applicable [commercial marketplace certification policies](/legal/marketplace/certification-policies) to be published on Azure Marketplace.
marketplace User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/user-roles.md
In order to access capabilities related to marketplace or your developer account
> [!NOTE] > For the commercial marketplace program, the Global admin, Business Contributor, Financial Contributor, and Marketer roles are not used. Assigning these roles to users has no effect. Only the Manager and Developer roles grant permissions to users.
-For more information about managing roles and permissions in other areas of Partner Center, such as Azure Active Directory (AD), Cloud Solution Provider (CSP), Control Panel Vendor (CPV), Guest users, or Microsoft Partner Network, see [Assign users roles and permissions in Partner Center](/partner-center/permissions-overview).
+For more information about managing roles and permissions in other areas of Partner Center, such as Azure Active Directory (AD), Cloud Solution Provider (CSP), Control Panel Vendor (CPV), Guest users, or Microsoft Cloud Partner Program, see [Assign users roles and permissions in Partner Center](/partner-center/permissions-overview).
> [!NOTE] > Any user management or role assignment activities you perform here are in the context of the account you're currently working in. Refer to the section on switching between sellers if you need to manage a different account.
network-function-manager Delete Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/delete-functions.md
+
+ Title: 'Tutorial: Delete network functions on Azure Stack Edge'
+
+description: In this tutorial, learn how to delete a network function as a managed application.
+++ Last updated : 05/10/2022+++
+# Tutorial: Delete network functions on Azure Stack Edge
+
+In this tutorial, you learn how to delete an Azure Network Function Manager - Network Function resource and an Azure Network Function Manager - Device resource by using the Azure portal.
++
+## Delete network function
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to the **Azure Network Function Manager - Devices** resource in which you deployed a network function, and then select **Network Function**.
+ ![Screenshot that shows how to select a network function.](media/delete-functions/select-network-function.png)
+
+1. Select **Delete** Network Function.
+ ![Screenshot that shows how to delete a network function.](media/delete-functions/delete-network-function.png)
+
+ > [!NOTE]
+ > If you encounter the following error while deleting the network function, continue with **Step 4**:
+ > *Failed to delete resource. Error: The client 'user@mail.com' with object id 'xxxx-9999-xxxx-9999-xxxx' has permission to perform action 'Microsoft.HybridNetwork/networkFunctions/delete' on scope 'mrg-ResourceGroup/providers/Microsoft.HybridNetwork/networkFunctions/NetworkFunction01'; however, the access is denied because of the deny assignment with name 'System deny assignment created by managed application /subscriptions/xxxx-0000-xxxx-0000-xxxx/resourceGroups/ResourceGroup/providers/Microsoft.Solutions/applications/managedApplication01' and Id 'xxxxxxxxxxxxxxxxxxxxxx' at scope '/subscriptions/xxxx-0000-xxxx-0000-xxxx/resourceGroups/mrg-ResourceGroup'.*
+ > ![Screenshot that shows an error for failed to delete.](media/delete-functions/failed-to-delete.png)
+
+1. Navigate to the search box in the **Azure portal** and search for the **Managed Application** that was named in the error in **Step 3**.
+ ![Screenshot that shows a managed application.](media/delete-functions/managed-application.png)
+
+1. Select **Delete** to delete the managed application.
+ ![Screenshot that shows how to delete a managed application.](media/delete-functions/delete-managed-application.png)
+
+## Delete network function manager - device
+
+ > [!IMPORTANT]
+ > Ensure that all network functions deployed within Azure Network Function Manager are deleted before proceeding to the next step.
+ >
+
+1. Navigate to the **Azure Network Function Manager - Devices** resource in which you deleted the network function, and then select **Delete** to delete the Azure Network Function Manager - Device resource.
+ ![Screenshot that shows how to delete a network function manager.](media/delete-functions/delete-network-function-manager.png)
network-function-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-function-manager/faq.md
You can register the Azure Stack Edge device and Network Function Manager resour
Check with your network function partner on the billing cycle for network functions deployed using Network Function Manager. Each partner will have a different billing policy for their network function offerings.
+### Does Network Function Manager support move of resources?
+
+Network Function Manager supports moving resources across resource groups and subscriptions in the same region. Moving network function resources cross-region is not supported due to dependencies on other regional resources.
+ ## Next steps For more information, see the [Overview](overview.md).
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
az storage account list
At this point, you are ready to create a packet capture. First, let's examine the parameters you may want to configure. Filters are one such parameter that can be used to limit the data that is stored by the packet capture. The following example sets up a packet capture with several filters. The first three filters collect outgoing TCP traffic only from local IP 10.0.0.3 to destination ports 20, 80 and 443. The last filter collects only UDP traffic. ```azurecli-interactive
-az network watcher packet-capture create --resource-group {resourceGroupName} --vm {vmName} --name packetCaptureName --storage-account {storageAccountName} --filters "[{\"protocol\":\"TCP\", \"remoteIPAddress\":\"1.1.1.1-255.255.255\",\"localIPAddress\":\"10.0.0.3\", \"remotePort\":\"20\"},{\"protocol\":\"TCP\", \"remoteIPAddress\":\"1.1.1.1-255.255.255\",\"localIPAddress\":\"10.0.0.3\", \"remotePort\":\"80\"},{\"protocol\":\"TCP\", \"remoteIPAddress\":\"1.1.1.1-255.255.255\",\"localIPAddress\":\"10.0.0.3\", \"remotePort\":\"443\"},{\"protocol\":\"UDP\"}]"
+az network watcher packet-capture create --resource-group {resourceGroupName} --vm {vmName} --name packetCaptureName --storage-account {storageAccountName} --filters "[{\"protocol\":\"TCP\", \"remoteIPAddress\":\"1.1.1.1-255.255.255.255\",\"localIPAddress\":\"10.0.0.3\", \"remotePort\":\"20\"},{\"protocol\":\"TCP\", \"remoteIPAddress\":\"1.1.1.1-255.255.255.255\",\"localIPAddress\":\"10.0.0.3\", \"remotePort\":\"80\"},{\"protocol\":\"TCP\", \"remoteIPAddress\":\"1.1.1.1-255.255.255.255\",\"localIPAddress\":\"10.0.0.3\", \"remotePort\":\"443\"},{\"protocol\":\"UDP\"}]"
``` The following example is the expected output from running the `az network watcher packet-capture create` command.
The following example is the expected output from running the `az network watche
"localIpAddress": "10.0.0.3", "localPort": "", "protocol": "TCP",
- "remoteIpAddress": "1.1.1.1-255.255.255",
+ "remoteIpAddress": "1.1.1.1-255.255.255.255",
"remotePort": "20" }, { "localIpAddress": "10.0.0.3", "localPort": "", "protocol": "TCP",
- "remoteIpAddress": "1.1.1.1-255.255.255",
+ "remoteIpAddress": "1.1.1.1-255.255.255.255",
"remotePort": "80" }, { "localIpAddress": "10.0.0.3", "localPort": "", "protocol": "TCP",
- "remoteIpAddress": "1.1.1.1-255.255.255",
+ "remoteIpAddress": "1.1.1.1-255.255.255.255",
"remotePort": "443" }, {
notification-hubs Cross Region Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/cross-region-recovery.md
+
+ Title: Azure Notification Hubs cross-region disaster recovery
+description: Learn about cross-region disaster recovery options in Azure Notification Hubs.
++++ Last updated : 10/07/2022+++
+# Cross-region disaster recovery (preview)
+
+> [!NOTE]
+> The ability to edit your cross-region disaster recovery options is available in preview. If you are interested in using this feature, contact your customer success manager at Microsoft, or create an Azure support ticket, which will be triaged by the support team.
+
+[Azure Notification Hubs](notification-hubs-push-notification-overview.md) provides an easy-to-use and scaled-out push engine that enables you to send notifications to any platform (iOS, Android, Windows, etc.) from any back-end (cloud or on-premises). This article describes the cross-region disaster recovery configuration options currently available.
+
+Cross-region disaster recovery provides *metadata* disaster recovery coverage. This is supported in paired and flexible region recovery
+options. Each Azure region is paired with another region within the same geography. All Notification Hubs tiers support [Azure paired regions](/azure/availability-zones/cross-region-replication-azure#azure-cross-region-replication-pairings-for-all-geographies)
+(where available) or a flexible recovery region option that enables you to choose from a list of supported regions.
+
+## Enable cross-region disaster recovery
+
+Cross-region disaster recovery options can be modified at any time.
+
+### Use existing namespace
+
+Use the Azure portal to edit an existing namespace:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All services** on the left menu.
+
+3. Select **Notification Hub Namespaces** in the **Internet of Things** section.
+
+4. On the **Notification Hub Namespaces** page, select the namespace for which you want to modify the disaster recovery settings.
+
+5. On the **Notification Hub Namespace** page for your namespace, you can see the current disaster recovery setting in the **Essentials** section.
+
+6. In the following example, paired recovery region is enabled. To modify your disaster recovery region selection, select the **(edit)** link next to the current selection.
+
+ :::image type="content" source="media/cross-region-recovery/cedr1.png" alt-text="Azure portal namespace":::
+
+7. On the **Edit Disaster recovery** pop-up screen, you can change your selections. Save your changes.
+
+ :::image type="content" source="media/cross-region-recovery/cedr2.png" alt-text="Azure portal edit recovery":::
+
+### Use new namespace
+
+To create a new namespace with disaster recovery, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. Select **All services** on the left menu.
+
+3. Select **Notification Hubs** in the **Mobile** section.
+
+4. Select the star icon next to the service name to add the service to the **FAVORITES** section on the left menu. After you add **Notification Hubs** to **FAVORITES**, select it on the left menu.
+
+ :::image type="content" source="media/cross-region-recovery/cedr3.png" alt-text="Azure portal favorites":::
+
+5. On the **Notification Hubs** page, select **Create** on the toolbar.
+
+ :::image type="content" source="media/cross-region-recovery/cedr4.png" alt-text="Create notification hub":::
+
+6. In the **Basics** tab on the **Notification Hub** page, perform the following steps:
+
+ 1. In **Subscription**, select the name of the Azure subscription you want to use, and then select an existing resource group, or create a
+ new one.
+ 1. Enter a unique name for the new namespace in **Namespace Details**.
+ 1. A namespace contains one or more notification hubs, so type a name for the hub in **Notification Hub Details**. Or, select an existing
+ namespace from the drop-down.
+ 1. Select a value from the **Location** drop-down list box. This value specifies the location in which you want to create the hub.
+ 1. Choose your **Disaster recovery** option: None, Paired recovery region, or Flexible recovery region. If you choose **Paired recovery region**, the failover region is displayed.
+
+ :::image type="content" source="media/cross-region-recovery/cedr5.png" alt-text="Notification hub properties":::
+
+ 1. If you select **Flexible recovery region**, use the drop-down to choose from a list of recovery regions.
+
+ :::image type="content" source="media/cross-region-recovery/cedr6.png" alt-text="Select region":::
+
+ 1. Select **Create**.
+
+### Add resiliency
+
+Paired and flexible region recovery only backs up metadata. You must implement a solution to repopulate the registration data into your new
+hub post-recovery:
+
+1. Create a secondary notification hub in a different datacenter. We recommend creating one from the beginning, to shield you from a disaster recovery event that might affect your management capabilities. You can also create one at the time of the disaster recovery event.
+
+2. Keep the secondary notification hub in sync with the primary notification hub using one of the following options:
+ - Use an app backend that simultaneously creates and updates installations in both notification hubs. Installations allow you to specify your own unique device identifier, making it more suitable for the replication scenario. For more information, [see this sample](https://github.com/Azure/azure-notificationhubs-dotnet/tree/main/Samples/RedundantHubSample).
+ - Use an app backend that gets a regular dump of registrations from the primary notification hub as a backup. It can then perform a bulk insert into the secondary notification hub.
+
+The secondary notification hub might end up with expired installations/registrations. When the push is made to an expired handle, Notification Hubs automatically cleans the associated installation/registration record based on the response received from the PNS server. To clean expired records from a secondary notification hub, add custom logic that processes feedback from each send. Then, expire installation/registration in the secondary notification hub.
+
+If you don't have a backend, when the app starts on target devices, they perform a new registration in the secondary notification hub. Eventually the secondary notification hub will have all the active devices registered.
+
+There will be a time period when devices with unopened apps won't receive notifications.
+
+## Next steps
+
+- [Azure Notification Hubs](notification-hubs-push-notification-overview.md)
purview How To Use Workflow Http Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-use-workflow-http-connector.md
+
+ Title: Workflow HTTP connector
+description: This article describes how to use the HTTP connector in Microsoft Purview workflows.
+++++ Last updated : 09/30/2022+++
+# Workflows HTTP connector
++
+You can use [workflows](concept-workflow.md) to automate some business processes through Microsoft Purview. The HTTP connector allows Microsoft Purview workflows to integrate with external applications. HTTP connectors use Representational State Transfer (REST) architecture, which allows Microsoft Purview workflows to interact directly with third-party applications by using web requests.
+
+The HTTP connector is available in all workflow templates.
+
+>[!NOTE]
+> To create or edit a workflow, you need the [workflow admin role](catalog-permissions.md) in Microsoft Purview. You can also contact the workflow admin in your collection, or reach out to your collection administrator, for permissions.
+
+1. To add an HTTP connector, select the **+** icon at the point in the template where you want to add it, and then select **HTTP connector**.
+
+ :::image type="content" source="./media/how-to-use-workflow-http-connector/add-http-connector.png" alt-text="Screenshot of how to add HTTP connector.":::
+
+1. Once you select the HTTP connector, you'll see the following parameters (a conceptual sketch of the equivalent raw request follows the screenshot below):
+ 1. Host - The request URL you want to call when this connector is executed.
+ 1. Method - Select one of the following methods: GET, PUT, PATCH, POST, or DELETE. These correspond to the create, read, update, and delete operations.
+ 1. Path - Optionally, you can enter a request URL path. You can use dynamic content for this parameter.
+ 1. Headers - Optionally, you can enter HTTP headers. HTTP headers let the client and the server pass additional information with an HTTP request or response.
+ 1. Queries - Optionally, you can pass query parameters.
+ 1. Body - Optionally, you can pass an HTTP body when invoking the URL.
+ 1. Authentication - The HTTP connector is integrated with Microsoft Purview credentials. Depending on the URL, you can invoke the endpoint with None (no authentication), or you can use credentials for basic authentication. To learn more about credentials, see the [Microsoft Purview credentials article](manage-credentials.md).
+
+ :::image type="content" source="./media/how-to-use-workflow-http-connector/add-http-properties.png" alt-text="Screenshot of how to add HTTP connector properties.":::
+
+1. By default, secure settings are turned on for HTTP connectors. To turn off secure inputs and outputs, select the ellipsis icon (**...**) to go to settings.
+
+ :::image type="content" source="./media/how-to-use-workflow-http-connector/add-http-settings.png" alt-text="Screenshot of how to add HTTP connector settings.":::
+
+1. You'll now see the settings for the HTTP connector, where you can turn secure inputs and outputs off.
+
+ :::image type="content" source="./media/how-to-use-workflow-http-connector/add-http-secure.png" alt-text="Screenshot of how to add HTTP connector secure input and outputs.":::
+
+## Next steps
+
+For more information about workflows, see these articles:
+
+- [Workflows in Microsoft Purview](concept-workflow.md)
+- [Approval workflow for business terms](how-to-workflow-business-terms-approval.md)
+- [Manage workflow requests and approvals](how-to-workflow-manage-requests-approvals.md)
+
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-kafka-dotnet.md
This quickstart teaches you how to send and receive *Atlas Kafka* topics events. We'll make use of *Azure Event Hubs* and the **Azure.Messaging.EventHubs** .NET library. > [!IMPORTANT]
-> A managed event hub is created automatically when your *Microsoft Purview* account is created. See, [Purview account creation](create-catalog-portal.md). You can publish messages to Event Hubs Kafka topic, ATLAS_HOOK. Purview will receive it, process it and notify Kafka topic ATLAS_ENTITIES of entity changes. This quickstart uses the new **Azure.Messaging.EventHubs** library.
+> A managed event hub is created automatically when your *Microsoft Purview* account is created. See, [Microsoft Purview account creation](create-catalog-portal.md). You can publish messages to Event Hubs Kafka topic, ATLAS_HOOK. Microsoft Purview will receive it, process it and notify Kafka topic ATLAS_ENTITIES of entity changes. This quickstart uses the new **Azure.Messaging.EventHubs** library.
## Prerequisites
To follow this quickstart, you need certain prerequisites in place:
>[!NOTE] >Enabling this Event Hubs namespace does incur a cost for the namespace. For specific details, see [the pricing page](https://azure.microsoft.com/pricing/details/purview/).
-## Publish messages to Purview
-Let's create a .NET Core console application that sends events to Purview via Event Hubs Kafka topic, **ATLAS_HOOK**.
+## Publish messages to Microsoft Purview
+Let's create a .NET Core console application that sends events to Microsoft Purview via Event Hubs Kafka topic, **ATLAS_HOOK**.
## Create a Visual Studio project
Next create a C# .NET console application in Visual Studio:
private const string eventHubName = "<EVENT HUB NAME>"; ```
- You can get the Event Hubs namespace associated with the Purview account by looking at the Atlas kafka endpoint primary/secondary connection strings. These can be found in **Properties** tab of your Purview account.
+ You can get the Event Hubs namespace associated with the Microsoft Purview account by looking at the Atlas kafka endpoint primary/secondary connection strings. These can be found in **Properties** tab of your account.
:::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that shows an Event Hubs Namespace.":::
- The event hub name for sending messages to Purview is **ATLAS_HOOK**.
+ The event hub name for sending messages to Microsoft Purview is **ATLAS_HOOK**.
-3. Replace the `Main` method with the following `async Main` method and add an `async ProduceMessage` to push messages into Purview. See the comments in the code for details.
+3. Replace the `Main` method with the following `async Main` method and add an `async ProduceMessage` to push messages into Microsoft Purview. See the comments in the code for details.
```csharp static async Task Main()
Next create a C# .NET console application in Visual Studio:
```
-## Receive Purview messages
-Next learn how to write a .NET Core console application that receives messages from event hubs using an event processor. The event processor manages persistent checkpoints and parallel receptions from event hubs. This simplifies the process of receiving events. You need to use the ATLAS_ENTITIES event hub to receive messages from Purview.
+## Receive Microsoft Purview messages
+Next learn how to write a .NET Core console application that receives messages from event hubs using an event processor. The event processor manages persistent checkpoints and parallel receptions from event hubs. This simplifies the process of receiving events. You need to use the ATLAS_ENTITIES event hub to receive messages from Microsoft Purview.
> [!WARNING] > Event Hubs SDK uses the most recent version of Storage API available. That version may not necessarily be available on your Stack Hub platform. If you run this code on Azure Stack Hub, you will experience runtime errors unless you target the specific version you are using. If you're using Azure Blob Storage as a checkpoint store, review the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and in your code, target that version.
We'll use Azure Storage as the checkpoint store. Use the following steps to crea
private const string blobContainerName = "<BLOB CONTAINER NAME>"; ```
- You can get event hub namespace associated with your Purview account by looking at your Atlas kafka endpoint primary/secondary connection strings. This can be found in the **Properties** tab of your Purview account.
+ You can get event hub namespace associated with your Microsoft Purview account by looking at your Atlas kafka endpoint primary/secondary connection strings. This can be found in the **Properties** tab of your Microsoft Purview account.
:::image type="content" source="media/manage-eventhub-kafka-dotnet/properties.png" alt-text="A screenshot that shows an Event Hubs Namespace.":::
- Use **ATLAS_ENTITIES** as the event hub name when sending messages to Purview.
+ Use **ATLAS_ENTITIES** as the event hub name when receiving messages from Microsoft Purview.
3. Replace the `Main` method with the following `async Main` method. See the comments in the code for details.
We'll use Azure Storage as the checkpoint store. Use the following steps to crea
> For the complete source code with more informational comments, see [this file on the GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/Sample01_HelloWorld.md). 6. Run the receiver application.
-### An example of a Message received from Purview
+### An example of a Message received from Microsoft Purview
```json {
purview Reference Microsoft Purview Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/reference-microsoft-purview-glossary.md
A verification of identity or tool used in an access control system. Credentials
A searchable inventory of assets and their associated metadata that allows users to find and curate data across a data estate. The Data Catalog also includes a business glossary where subject matter experts can provide terms and definitions to add a business context to an asset. ## Data curator A role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets.
+## Data Estate Insights
+An area of the Microsoft Purview governance portal that provides up-to-date reports and actionable insights about the data estate.
## Data map A metadata repository that is the foundation of the Microsoft Purview governance portal. The data map is a graph that describes assets across a data estate and is populated through scans and other data ingestion processes. This graph helps organizations understand and govern their data by providing rich descriptions of assets, representing data lineage, classifying assets, storing relationships between assets, and housing information at both the technical and semantic layers. The data map is an open platform that can be interacted with and accessed through Apache Atlas APIs or the Microsoft Purview governance portal. ## Data map operation
A scan that detects and processes assets that have been created, modified, or de
## Ingested asset An asset that has been scanned, classified (when applicable), and added to the Microsoft Purview Data Map. Ingested assets are discoverable and consumable within the data catalog through automated scanning or external connections, such as Azure Data Factory and Azure Synapse. ## Insight reader
-A role that provides read-only access to insights reports for collections where the insights reader also has the **Data reader** role.
-## Data Estate Insights
-An area of the Microsoft Purview governance portal that provides up-to-date reports and actionable insights about the data estate.
+A role that provides read-only access to Data Estate Insights reports. Insight readers must have at least data reader role access to a collection to view reports about that specific collection.
## Integration runtime The compute infrastructure used to scan in a data source. ## Lineage
An area within the Microsoft Purview Governance Portal where you can manage conn
The minimum percentage of matches among the distinct data values in a column that must be found by the scanner for a classification to be applied. For example, a minimum match threshold of 60% for employee ID requires that 60% of all distinct values among the sampled data in a column match the data pattern set for employee ID. If the scanner samples 128 values in a column and finds 60 distinct values in that column, then at least 36 of the distinct values (60%) must match the employee ID data pattern for the classification to be applied.
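As a small sketch of that arithmetic only (the numbers are taken from the example above):

```csharp
using System;

int distinctSampledValues = 60;        // distinct values found among the sampled data in the column
double minimumMatchThreshold = 0.60;   // 60% threshold configured for the classification

// At least this many distinct values must match the employee ID pattern.
int requiredMatches = (int)Math.Ceiling(distinctSampledValues * minimumMatchThreshold);
Console.WriteLine(requiredMatches);    // 36
```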
+## Physical asset
+An asset that represents a physical data object. Physical assets are different from business assets because they represent real data. For example, a database is a physical asset.
## Policy A statement or collection of statements that controls how access to data and data sources should be authorized. ## Object type
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
Previously updated : 06/17/2022 Last updated : 10/04/2022 # Connect to Azure Data Lake Storage in Microsoft Purview
Source storage account can support up to 20 targets, and target storage account
## Access policy
-To create an access policy for Azure Data Lake Storage Gen 2, follow these guides:
-* [Single storage account](./how-to-policies-data-owner-storage.md) - This guide will allow you to enable access policies on a single Azure Storage account in your subscription.
-* [All sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+### Access policy prerequisites on Azure Storage accounts
-## Next steps
+### Configure the Microsoft Purview account for policies
+
+### Register the data source in Microsoft Purview for Data Use Management
+The Azure Storage resource needs to be registered first with Microsoft Purview before you can create access policies.
+To register your resource, follow the **Prerequisites** and **Register** sections of this guide:
+- [Register and scan Azure Data Lake Storage (ADLS) Gen2 - Microsoft Purview](register-scan-adls-gen2.md#prerequisites)
+
+After you've registered the data source, you'll need to enable Data Use Management. This is a prerequisite for creating policies on the data source. Data Use Management can affect the security of your data, because it delegates management of access to the data sources to certain Microsoft Purview roles. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
-Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+Once your data source has the **Data Use Management** option set to **Enabled**, it will look like this screenshot:
+![Screenshot shows how to register a data source for policy with the option Data Use Management set to Enabled.](./media/how-to-policies-data-owner-storage/register-data-source-for-policy-storage.png)
+
+### Create a policy
+To create an access policy for Azure Data Lake Storage Gen2, follow these guides:
+* [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure Storage account in your subscription.
+* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The prerequisite is that the subscription or resource group is registered with the Data Use Management option enabled.
+
+## Next steps
+Follow the guides below to learn more about Microsoft Purview and your data.
+- [Data owner policies in Microsoft Purview](concept-policies-data-owner.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Data share in Microsoft Purview](concept-data-share.md)
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
Previously updated : 06/17/2022 Last updated : 10/04/2022
Source storage account can support up to 20 targets, and target storage account
## Access policy
-To create an access policy for Azure Blob Storage, follow these guides:
-* [Single storage account](./how-to-policies-data-owner-storage.md) - This guide will allow you to enable access policies on a single Azure Storage account in your subscription.
-* [All sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+### Access policy prerequisites on Azure Storage accounts
+### Configure the Microsoft Purview account for policies
-## Next steps
+### Register the data source in Microsoft Purview for Data Use Management
+The Azure Storage resource needs to be registered first with Microsoft Purview before you can create access policies.
+To register your resource, follow the **Prerequisites** and **Register** sections of this guide:
+- [Register and scan Azure Storage Blob - Microsoft Purview](register-scan-azure-blob-storage-source.md#prerequisites)
+
+After you've registered the data source, you'll need to enable Data Use Management. This is a prerequisite for creating policies on the data source. Data Use Management can affect the security of your data, because it delegates management of access to the data sources to certain Microsoft Purview roles. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+Once your data source has the **Data Use Management** option set to **Enabled**, it will look like this screenshot:
+![Screenshot shows how to register a data source for policy with the option Data Use Management set to Enabled.](./media/how-to-policies-data-owner-storage/register-data-source-for-policy-storage.png)
-Now that you have registered your source, follow the below guides to learn more about Microsoft Purview and your data.
+### Create a policy
+To create an access policy for Azure Blob Storage, follow these guides:
+* [Data owner policy on a single storage account](./how-to-policies-data-owner-storage.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure Storage account in your subscription.
+* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The prerequisite is that the subscription or resource group is registered with the Data Use Management option enabled.
+
+## Next steps
-* [Data Estate Insights in Microsoft Purview](concept-insights.md)
-* [Data Sharing in Microsoft Purview](concept-data-share.md)
-* [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
-* [Search Data Catalog](how-to-search-catalog.md)
+Follow the guides below to learn more about Microsoft Purview and your data.
+- [Data owner policies in Microsoft Purview](concept-policies-data-owner.md)
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
+- [Data Sharing in Microsoft Purview](concept-data-share.md)
+- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
+- [Search Data Catalog](how-to-search-catalog.md)
purview Register Scan Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-sql-database.md
Previously updated : 04/26/2022 Last updated : 10/04/2022 # Connect to Azure SQL Database in Microsoft Purview
Scans can be managed or run again on completion
## Access policy
+### Access policy prerequisites on Azure SQL Database
+
+### Configure the Microsoft Purview account for policies
+
+### Register the data source and enable Data use management
+The Azure SQL Database resource needs to be registered first with Microsoft Purview before you can create access policies.
+To register your resources, follow the **Prerequisites** and **Register** sections of this guide:
+[Register Azure SQL Database](./register-scan-azure-sql-database.md#prerequisites)
+
+After you've registered the data source, you'll need to enable Data Use Management. This is a prerequisite for creating policies on the data source. Data Use Management can affect the security of your data, because it delegates management of access to the data sources to certain Microsoft Purview roles. **Go through the secure practices related to Data Use Management in this guide**: [How to enable Data Use Management](./how-to-enable-data-use-management.md)
+
+Once your data source has the **Data Use Management** option *Enabled*, it will look like this screenshot.
+![Screenshot shows how to register a data source for policy.](./media/how-to-policies-data-owner-sql/register-data-source-for-policy-azure-sql-db.png)
+
+### Create a policy
To create an access policy for Azure SQL Database, follow these guides:
-* [Single SQL account](./how-to-policies-data-owner-azure-sql-db.md) - This guide will allow you to enable access policies on a single Azure SQL Database account in your subscription.
-* [All data sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to enable access policies on all enabled and available sources in a resource group, or across an Azure subscription.
+* [Data owner policy on a single Azure SQL Database account](./how-to-policies-data-owner-azure-sql-db.md#create-and-publish-a-data-owner-policy) - This guide will allow you to provision access on a single Azure SQL Database account in your subscription.
+* [Data owner policy covering all sources in a subscription or resource group](./how-to-policies-data-owner-resource-group.md) - This guide will allow you to provision access on all enabled data sources in a resource group, or across an Azure subscription. The prerequisite is that the subscription or resource group is registered with the Data Use Management option enabled.
## Lineage (Preview) <a id="lineagepreview"></a>
You can [browse data catalog](how-to-browse-catalog.md) or [search data catalog]
## Next steps
-Now that you've registered your source, follow the below guides to learn more about Microsoft Purview and your data.
-
+Follow the guides below to learn more about Microsoft Purview and your data.
+- [Data owner policies in Microsoft Purview](concept-policies-data-owner.md)
- [Data Estate Insights in Microsoft Purview](concept-insights.md) - [Lineage in Microsoft Purview](catalog-lineage-user-guide.md) - [Search Data Catalog](how-to-search-catalog.md)
search Search Howto Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk.md
Title: Use Azure.Search.Documents (v11) in .NET
-description: Learn how to create and manage search objects in a .NET application using C# and the Azure.Search.Documents (v11) client library. Code snippets demonstrate connecting to the service, creating indexes, and queries.
+description: Learn how to create and manage search objects in a .NET application using C# and the Azure.Search.Documents (v11) client library.
ms.devlang: csharp - Previously updated : 08/15/2022+ Last updated : 10/04/2022 + # How to use Azure.Search.Documents in a C# .NET Application This article explains how to create and manage search objects using C# and the [**Azure.Search.Documents**](/dotnet/api/overview/azure/search) (version 11) client library in the Azure SDK for .NET.
The client library defines classes like `SearchIndex`, `SearchField`, and `Searc
Azure.Search.Documents (version 11) targets version [`2020-06-30` of the Azure Cognitive Search REST API](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/search/data-plane/Azure.Search/preview/2020-06-30).
-The client library does not provide [service management operations](/rest/api/searchmanagement/), such as creating and scaling search services and managing API keys. If you need to manage your search resources from a .NET application, use the [Microsoft.Azure.Management.Search](/dotnet/api/overview/azure/search/management) library in the Azure SDK for .NET.
+The client library doesn't provide [service management operations](/rest/api/searchmanagement/), such as creating and scaling search services and managing API keys. If you need to manage your search resources from a .NET application, use the [Microsoft.Azure.Management.Search](/dotnet/api/overview/azure/search/management) library in the Azure SDK for .NET.
## Upgrade to v11
-If you have been using the previous version of the .NET SDK and you'd like to upgrade to the current generally available version, see [Upgrade to Azure Cognitive Search .NET SDK version 11](search-dotnet-sdk-migration-version-11.md)
+If you have been using the previous version of the .NET SDK and you'd like to upgrade to the current generally available version, see [Upgrade to Azure Cognitive Search .NET SDK version 11](search-dotnet-sdk-migration-version-11.md).
## SDK requirements
If you have been using the previous version of the .NET SDK and you'd like to up
+ Download the [Azure.Search.Documents package](https://www.nuget.org/packages/Azure.Search.Documents) using **Tools** > **NuGet Package Manager** > **Manage NuGet Packages for Solution** in Visual Studio. Search for the package name `Azure.Search.Documents`.
-Azure SDK for .NET conforms to [.NET Standard 2.0](/dotnet/standard/net-standard#net-implementation-support), which means .NET Framework 4.6.1 and .NET Core 2.0 as minimum requirements.
+Azure SDK for .NET conforms to [.NET Standard 2.0](/dotnet/standard/net-standard#net-implementation-support).
## Example application
static void Main(string[] args)
Next is a partial screenshot of the output, assuming you run this application with a valid service name and API keys: ### Client types
private static SearchClient CreateSearchClientForQueries(string indexName, IConf
### Deleting the index
-In the early stages of development, you might want to include a [`DeleteIndex`](/dotnet/api/azure.search.documents.indexes.searchindexclient.deleteindex) statement to delete a work-in-progress index so that you can recreate it with an updated definition. Sample code for Azure Cognitive Search often includes a deletion step so that you can re-run the sample.
+In the early stages of development, you might want to include a [`DeleteIndex`](/dotnet/api/azure.search.documents.indexes.searchindexclient.deleteindex) statement to delete a work-in-progress index so that you can recreate it with an updated definition. Sample code for Azure Cognitive Search often includes a deletion step so that you can rerun the sample.
The following line calls `DeleteIndexIfExists`:
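As a minimal sketch of that call and one possible shape of the helper (the endpoint, key, and index name placeholders are assumptions):

```csharp
using System;
using System.Linq;
using Azure;
using Azure.Search.Documents.Indexes;

var indexClient = new SearchIndexClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    new AzureKeyCredential("<your-admin-api-key>"));

DeleteIndexIfExists("hotels", indexClient);

// Deletes the index only if it already exists, so the sample can recreate it.
static void DeleteIndexIfExists(string indexName, SearchIndexClient indexClient)
{
    if (indexClient.GetIndexNames().Any(name => name == indexName))
    {
        indexClient.DeleteIndex(indexName);
    }
}
```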
public partial class Hotel
When defining fields, you can use the base [`SearchField`](/dotnet/api/azure.search.documents.indexes.models.searchfield) class, or you can use derivative helper models that serve as "templates", with pre-configured properties.
-Exactly one field in your index must serve as the document key (`IsKey = true`). It must be a string, and it must uniquely identify each document. It's also required to have `IsHidden = true`, which means it cannot be visible in search results.
+Exactly one field in your index must serve as the document key (`IsKey = true`). It must be a string, and it must uniquely identify each document. It's also required to have `IsHidden = true`, which means it can't be visible in search results.
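As a sketch of what such a key field can look like on the model class (the property and JSON names are assumptions based on the hotels example):

```csharp
using System.Text.Json.Serialization;
using Azure.Search.Documents.Indexes;

public partial class Hotel
{
    // Exactly one field carries IsKey = true. It must be a string that uniquely
    // identifies each document; per the guidance above it's also marked hidden.
    [SimpleField(IsKey = true, IsHidden = true, IsFilterable = true)]
    [JsonPropertyName("HotelId")]
    public string HotelId { get; set; }
}
```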
| Field type | Description and usage | ||--|
Did you happen to notice the `SmokingAllowed` property?
public bool? SmokingAllowed => (Rooms != null) ? Array.Exists(Rooms, element => element.SmokingAllowed == true) : (bool?)null; ```
-The `JsonIgnore` attribute on this property tells the `FieldBuilder` to not serialize it to the index as a field. This is a great way to create client-side calculated properties you can use as helpers in your application. In this case, the `SmokingAllowed` property reflects whether any `Room` in the `Rooms` collection allows smoking. If all are false, it indicates that the entire hotel does not allow smoking.
+The `JsonIgnore` attribute on this property tells the `FieldBuilder` to not serialize it to the index as a field. This is a great way to create client-side calculated properties you can use as helpers in your application. In this case, the `SmokingAllowed` property reflects whether any `Room` in the `Rooms` collection allows smoking. If all are false, it indicates that the entire hotel doesn't allow smoking.
## Load an index
Second, define a method that sends a query request.
Each time the method executes a query, it creates a new [`SearchOptions`](/dotnet/api/azure.search.documents.searchoptions) object. This object is used to specify additional options for the query such as sorting, filtering, paging, and faceting. In this method, we're setting the `Filter`, `Select`, and `OrderBy` property for different queries. For more information about the search query expression syntax, [Simple query syntax](/rest/api/searchservice/Simple-query-syntax-in-Azure-Search).
-The next step is query execution. Running the search is done using the `SearchClient.Search` method. For each query, pass the search text to use as a string (or `"*"` if there is no search text), plus the search options created earlier. We also specify `Hotel` as the type parameter for `SearchClient.Search`, which tells the SDK to deserialize documents in the search results into objects of type `Hotel`.
+The next step is query execution. Running the search is done using the `SearchClient.Search` method. For each query, pass the search text to use as a string (or `"*"` if there's no search text), plus the search options created earlier. We also specify `Hotel` as the type parameter for `SearchClient.Search`, which tells the SDK to deserialize documents in the search results into objects of type `Hotel`.
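As a generic sketch of that pattern (endpoint, key, index, and field names are assumptions; the article's sample passes its `Hotel` class as the type parameter, while this sketch uses the untyped `SearchDocument` to stay self-contained):

```csharp
using System;
using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

var searchClient = new SearchClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    "hotels",
    new AzureKeyCredential("<your-query-api-key>"));

var options = new SearchOptions
{
    OrderBy = { "LastRenovationDate desc" },        // sorting
    Select = { "HotelName", "LastRenovationDate" }, // trim each result to two fields
    Size = 2                                        // top two documents only
};

// "*" means no search text; results deserialize into SearchDocument objects here.
SearchResults<SearchDocument> response = searchClient.Search<SearchDocument>("*", options);
foreach (SearchResult<SearchDocument> result in response.GetResults())
{
    Console.WriteLine(result.Document["HotelName"]);
}
```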
```csharp private static void RunQueries(SearchClient searchClient)
RunQueries(indexClientForQueries);
### Explore query constructs
-Let's take a closer look at each of the queries in turn. Here is the code to execute the first query:
+Let's take a closer look at each of the queries in turn. Here's the code to execute the first query:
```csharp options = new SearchOptions();
results = searchClient.Search<Hotel>("*", options);
The above query uses an OData `$filter` expression, `Rooms/any(r: r/BaseRate lt 100)`, to filter the documents in the index. This uses the [any operator](./search-query-odata-collection-operators.md) to apply the 'BaseRate lt 100' to every item in the Rooms collection. For more information, see [OData filter syntax](./query-odata-filter-orderby-syntax.md).
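A sketch of a query that applies that filter (reusing the `searchClient` from the earlier sketch; the search text and selected fields are assumptions):

```csharp
// Sketch: filter to hotels where at least one room has a BaseRate under 100.
var filterOptions = new SearchOptions
{
    Filter = "Rooms/any(r: r/BaseRate lt 100)",
    Select = { "HotelId", "HotelName" }
};

SearchResults<SearchDocument> filtered =
    searchClient.Search<SearchDocument>("hotels", filterOptions);
```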
-In the third query, find the top two hotels that have been most recently renovated, and show the hotel name and last renovation date. Here is the code:
+In the third query, find the top two hotels that have been most recently renovated, and show the hotel name and last renovation date. Here's the code:
```csharp options =
results = searchClient.Search<Hotel>("hotel", options);
WriteDocuments(results); ```
-This section concludes this introduction to the .NET SDK, but don't stop here. The next section suggests additional resources for learning more about programming with Azure Cognitive Search.
+This section concludes this introduction to the .NET SDK, but don't stop here. The next section suggests other resources for learning more about programming with Azure Cognitive Search.
## Next steps
search Search Indexer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-tutorial.md
Previously updated : 01/23/2021 Last updated : 10/04/2022
-#Customer intent: As a developer, I want an introduction the indexing Azure SQL data for Azure Cognitive Search.
+ # Tutorial: Index Azure SQL data using the .NET SDK
If you don't have an Azure subscription, create a [free account](https://azure.m
## Prerequisites
-+ [Azure SQL Database](https://azure.microsoft.com/services/sql-database/)
-+ [Visual Studio](https://visualstudio.microsoft.com/downloads/)
-+ [Create](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices)
+* [Azure SQL Database](https://azure.microsoft.com/services/sql-database/)
+* [Visual Studio](https://visualstudio.microsoft.com/downloads/)
+* [Create](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices)
-> [!Note]
+> [!NOTE]
> You can use the free service for this tutorial. A free search service limits you to three indexes, three indexers, and three data sources. This tutorial creates one of each. Before starting, make sure you have room on your service to accept the new resources. ## Download files
If you have an existing Azure SQL Database resource, you can add the hotels tabl
1. Find or create a **SQL Database**. You can use defaults and the lowest level pricing tier. One advantage to creating a server is that you can specify an administrator user name and password, necessary for creating and loading tables in a later step.
- :::image type="content" source="media/search-indexer-tutorial/indexer-new-sqldb.png" alt-text="New database page" border="false":::
+ :::image type="content" source="media/search-indexer-tutorial/indexer-new-sqldb.png" alt-text="Screenshot of the Create SQL Database page in Azure portal." border="true":::
+
+1. Select **Review + create** to deploy the new server and database. Wait for the server and database to deploy. Go to the resource.
+
+1. On the navigation pane, select **Getting started** and then select **Configure** to allow access.
+
+1. Under Public access, select **Selected networks**.
-1. Click **Review + create** to deploy the new server and database. Wait for the server and database to deploy.
+1. Under Firewall rules, add your client IPv4 address. This is the address of the client machine you're using with the portal.
-1. On the navigation pane, click **Query editor (preview)** and enter the user name and password of server admin.
+1. Under Exceptions, select **Allow Azure services and resources to access this server**.
- If access is denied, copy the client IP address from the error message, and then click the **Set server firewall** link to add a rule that allows access from your client computer, using your client IP for the range. It can take several minutes for the rule to take effect.
+1. Save your changes and then close the Networking page.
-1. In Query editor, click **Open query** and navigate to the location of *hotels.sql* file on your local computer.
+1. On the navigation pane, select **Query editor (preview)** and enter the user name and password of server admin.
-1. Select the file and click **Open**. The script should look similar to the following screenshot:
+ You'll probably get an access denied error. Copy the client IP address from the error message. Return to the firewall rules page to add a rule that allows access from your client.
- :::image type="content" source="media/search-indexer-tutorial/sql-script.png" alt-text="SQL script" border="false":::
+1. In Query editor, select **Open query** and navigate to the location of *hotels.sql* file on your local computer.
-1. Click **Run** to execute the query. In the Results pane, you should see a query succeeded message, for 3 rows.
+1. Select the file and select **Open**. The script should look similar to the following screenshot:
+
+ :::image type="content" source="media/search-indexer-tutorial/sql-script.png" alt-text="Screenshot of SQL script in a Query Editor window." border="true":::
+
+1. Select **Run** to execute the query. In the Results pane, you should see a query succeeded message, for three rows.
1. To return a rowset from this table, you can execute the following query as a verification step:
If you have an existing Azure SQL Database resource, you can add the hotels tabl
Server=tcp:{your_dbname}.database.windows.net,1433;Initial Catalog=hotels-db;Persist Security Info=False;User ID={your_username};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30; ```
-You will need this connection string in the next exercise, setting up your environment.
+You'll need this connection string in the next exercise, setting up your environment.
### Azure Cognitive Search
API calls require the service URL and an access key. A search service is created
1. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
- :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Get an HTTP endpoint and access key" border="false":::
+ :::image type="content" source="media/search-get-started-rest/get-url-key.png" alt-text="Screenshot of Azure portal pages showing the HTTP endpoint and access key location for a search service." border="false":::
## 2 - Set up your environment
API calls require the service URL and an access key. A search service is created
1. In Solution Explorer, open **appsettings.json** to provide connection information.
-1. For `SearchServiceEndPoint`, if the full URL on the service overview page is "https://my-demo-service.search.windows.net", then the value to provide is that URL.
+1. For `SearchServiceEndPoint`, if the full URL on the service overview page is "https://my-demo-service.search.windows.net", then the value to provide is the entire URL.
1. For `AzureSqlConnectionString`, the string format is similar to this: `"Server=tcp:{your_dbname}.database.windows.net,1433;Initial Catalog=hotels-db;Persist Security Info=False;User ID={your_username};Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"` ```json {
- "SearchServiceEndPoint": "<placeholder-search-url>",
+ "SearchServiceEndPoint": "<placeholder-search-full-url>",
"SearchServiceAdminApiKey": "<placeholder-admin-key-for-search-service>", "AzureSqlConnectionString": "<placeholder-ADO.NET-connection-string", } ```
-1. In the connection string, make sure the connection string contains a valid password. While the database and user names will copy over, the password must be entered manually.
+1. Replace the user password in the SQL connection string with a valid password. While the database and user names will copy over, the password must be entered manually. The sketch after these steps shows how these settings are read by the application.
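As a sketch of how those three settings are typically read and consumed at run time (the configuration keys come from the snippet above; the client construction is an assumption based on the Azure.Search.Documents library):

```csharp
using System;
using Azure;
using Azure.Search.Documents.Indexes;
using Microsoft.Extensions.Configuration;

IConfigurationRoot configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: false)
    .Build();

// The endpoint must be the full URL (not just the service name) because it's parsed as a URI.
var indexerClient = new SearchIndexerClient(
    new Uri(configuration["SearchServiceEndPoint"]),
    new AzureKeyCredential(configuration["SearchServiceAdminApiKey"]));

// Passed to the data source definition later in the tutorial.
string sqlConnectionString = configuration["AzureSqlConnectionString"];
```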
## 3 - Create the pipeline Indexers require a data source object and an index. Relevant code is in two files:
- + **hotel.cs**, containing a schema that defines the index
- + **Program.cs**, containing functions for creating and managing structures in your service
+* **hotel.cs**, containing a schema that defines the index
+
+* **Program.cs**, containing functions for creating and managing structures in your service
### In hotel.cs
A schema can also include other elements, including scoring profiles for boostin
The main program includes logic for creating [an indexer client](/dotnet/api/azure.search.documents.indexes.models.searchindexer), an index, a data source, and an indexer. The code checks for and deletes existing resources of the same name, under the assumption that you might run this program multiple times.
-The data source object is configured with settings that are specific to Azure SQL Database resources, including [partial or incremental indexing](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#CaptureChangedRows) for leveraging the built-in [change detection features](/sql/relational-databases/track-changes/about-change-tracking-sql-server) of Azure SQL. The source demo hotels database in Azure SQL has a "soft delete" column named **IsDeleted**. When this column is set to true in the database, the indexer removes the corresponding document from the Azure Cognitive Search index.
+The data source object is configured with settings that are specific to Azure SQL Database resources, including [partial or incremental indexing](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md#CaptureChangedRows) for using the built-in [change detection features](/sql/relational-databases/track-changes/about-change-tracking-sql-server) of Azure SQL. The source demo hotels database in Azure SQL has a "soft delete" column named **IsDeleted**. When this column is set to true in the database, the indexer removes the corresponding document from the Azure Cognitive Search index.
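As a sketch of such a data source definition (names and the marker value are assumptions; `indexerClient` and `configuration` reuse the earlier sketch), with integrated change tracking and soft-delete detection on the **IsDeleted** column:

```csharp
using Azure.Search.Documents.Indexes.Models;

var dataSource = new SearchIndexerDataSourceConnection(
    "azure-sql",                                  // data source name
    SearchIndexerDataSourceType.AzureSql,
    configuration["AzureSqlConnectionString"],
    new SearchIndexerDataContainer("hotels"))     // table to index
{
    // Incremental indexing via SQL integrated change tracking.
    DataChangeDetectionPolicy = new SqlIntegratedChangeTrackingPolicy(),

    // Remove documents from the index when IsDeleted is set to "true" in the table.
    DataDeletionDetectionPolicy = new SoftDeleteColumnDeletionDetectionPolicy
    {
        SoftDeleteColumnName = "IsDeleted",
        SoftDeleteMarkerValue = "true"
    }
};

indexerClient.CreateOrUpdateDataSourceConnection(dataSource);
```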
```csharp Console.WriteLine("Creating data source...");
catch (CloudException e) when (e.Response.StatusCode == (HttpStatusCode)429)
Press F5 to build and run the solution. The program executes in debug mode. A console window reports the status of each operation.
- :::image type="content" source="media/search-indexer-tutorial/console-output.png" alt-text="Console output" border="false":::
+ :::image type="content" source="media/search-indexer-tutorial/console-output.png" alt-text="Screenshot showing the console output for the program." border="true":::
Your code runs locally in Visual Studio, connecting to your search service on Azure, which in turn connects to Azure SQL Database and retrieves the dataset. With this many operations, there are several potential points of failure. If you get an error, check the following conditions first:
-+ Search service connection information that you provide is the full URL. If you entered just the service name, operations stop at index creation, with a failure to connect error.
+* Search service connection information that you provide is the full URL. If you entered just the service name, operations stop at index creation, with a failure to connect error.
-+ Database connection information in **appsettings.json**. It should be the ADO.NET connection string obtained from the portal, modified to include a username and password that are valid for your database. The user account must have permission to retrieve data. Your local client IP address must be allowed inbound access through the firewall.
+* Database connection information in **appsettings.json**. It should be the ADO.NET connection string obtained from the portal, modified to include a username and password that are valid for your database. The user account must have permission to retrieve data. Your local client IP address must be allowed inbound access through the firewall.
-+ Resource limits. Recall that the Free tier has limits of 3 indexes, indexers, and data sources. A service at the maximum limit cannot create new objects.
+* Resource limits. Recall that the Free tier has limits of three indexes, indexers, and data sources. A service at the maximum limit can't create new objects.
## 5 - Search
Use Azure portal to verify object creation, and then use **Search explorer** to
1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, open each list in turn to verify the object is created. **Indexes**, **Indexers**, and **Data Sources** will have "hotels", "azure-sql-indexer", and "azure-sql", respectively.
- :::image type="content" source="media/search-indexer-tutorial/tiles-portal.png" alt-text="Indexer and data source tiles" border="false":::
+ :::image type="content" source="media/search-indexer-tutorial/tiles-portal.png" alt-text="Screenshot of the indexer and data source tiles in the Azure portal search service page." border="true":::
-1. Select the hotels index. On the hotels page, **Search explorer** is the first tab.
+1. On the Indexes tab, select the hotels index. On the hotels page, **Search explorer** is the first tab.
-1. Click **Search** to issue an empty query.
+1. Select **Search** to issue an empty query.
The three entries in your index are returned as JSON documents. Search explorer returns documents in JSON so that you can view the entire structure.
- :::image type="content" source="media/search-indexer-tutorial/portal-search.png" alt-text="Query an index" border="false":::
-
-1. Next, enter a search string: `search=river&$count=true`.
+ :::image type="content" source="media/search-indexer-tutorial/portal-search.png" alt-text="Screenshot of a Search Explorer query for the target index." border="true":::
+
+1. Next, enter a search string: `search=river&$count=true`.
This query invokes full text search on the term `river`, and the result includes a count of the matching documents. Returning the count of matching documents is helpful in testing scenarios when you have a large index with thousands or millions of documents. In this case, only one document matches the query.
-1. Lastly, enter a search string that limits the JSON output to fields of interest: `search=river&$count=true&$select=hotelId, baseRate, description`.
+1. Lastly, enter a search string that limits the JSON output to fields of interest: `search=river&$count=true&$select=hotelId, baseRate, description`.
The query response is reduced to selected fields, resulting in more concise output.
sentinel Ci Cd Custom Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-content.md
The following Microsoft Sentinel content types can be deployed through a reposit
> This article does *not* describe how to create these types of content from scratch. For more information, see the relevant [Microsoft Sentinel GitHub wiki](https://github.com/Azure/Azure-Sentinel/wiki#get-started) for each content type. >
- Repositories content needs to be stored as [ARM templates](/azure/azure-resource-manager/templates/overview). The repositories deployment pipeline doesn't validate the content except to confirm it's in the correct JSON format.
+ Repositories content needs to be stored as [ARM templates](/azure/azure-resource-manager/templates/overview). The repositories deployment doesn't validate the content except to confirm it's in the correct JSON format.
The first step to validate your content is to test it within Microsoft Sentinel. You can also apply the [Microsoft Sentinel GitHub validation process](https://github.com/Azure/Azure-Sentinel/wiki#test-your-contribution) and tools to complement your validation process.
A sample repository is available with ARM templates for each of the content type
The **smart deployments** feature is a back-end capability that improves performance by actively tracking modifications made to the content files of a connected repository. It uses a CSV file within the '.sentinel' folder in your repository to audit each commit. The workflow avoids redeploying content that hasn't been modified since the last deployment. This process improves your deployment performance and prevents tampering with unchanged content in your workspace, such as resetting dynamic schedules of your analytics rules.
-Smart deployments are enabled by default on newly created connections. If you prefer all source control content to be deployed every time a deployment is triggered, regardless of whether that content was modified or not, you can modify your workflow to disable smart deployments. For more information, see [Customize the deployment workflow](ci-cd.md#customize-the-deployment-workflow).
+Smart deployments are enabled by default on newly created connections. If you prefer all source control content to be deployed every time a deployment is triggered, regardless of whether that content was modified or not, you can modify your workflow to disable smart deployments. For more information, see [Customize the workflow or pipeline](ci-cd-custom-deploy.md#customize-the-workflow-or-pipeline).
> [!NOTE] > This capability was launched in public preview on April 20th, 2022. Connections created prior to launch would need to be updated or recreated for smart deployments to be turned on.
Smart deployments are enabled by default on newly created connections. If you pr
## Consider deployment customization options
-Even with smart deployments enabled, the default behavior is to push all the updated content from the connected repository branch. If the default configuration for your content deployment from GitHub or Azure DevOps doesn't meet all your requirements, you can modify the experience to fit your needs.
+Several customization options are available when deploying content with Microsoft Sentinel repositories.
-For example, you may want to:
-- turn off smart deployments
+#### Customize the workflow or pipeline
+
+You may want to customize the workflow or pipeline in one of the following ways:
- configure different deployment triggers - deploy content only from a specific root folder for a given workspace - schedule the workflow to run periodically - combine different workflow events together-- prioritize content to be evaluated before the entire repo is enumerated for valid ARM templates
+- turn off smart deployments
+
+These customizations are defined in a .yml file specific to your workflow or pipeline. For more details on how to implement these customizations, see [Customize repository deployments](ci-cd-custom-deploy.md#customize-the-workflow-or-pipeline).
-For more details on how to implement these customizations, see [Customize the deployment workflow](ci-cd.md#customize-the-deployment-workflow).
+#### Customize the deployment
+
+Once the workflow or pipeline is triggered, the deployment supports the following scenarios:
+- prioritize content to be deployed before the rest of the repo content
+- exclude content from deployment
+- specify ARM template parameter files
+
+These options are available through a feature of the PowerShell deployment script called from the workflow or pipeline. For more details on how to implement these customizations, see [Customize repository deployments](ci-cd-custom-deploy.md#customize-your-connection-configuration).
## Next steps Get more examples and step by step instructions on deploying Microsoft Sentinel repositories. -- [Sentinel CICD sample repository](https://github.com/SentinelCICD/RepositoriesSampleContent) - [Deploy custom content from your repository](ci-cd.md)
+- [Sentinel CICD sample repository](https://github.com/SentinelCICD/RepositoriesSampleContent)
- [Automate Sentinel integration with DevOps](/azure/architecture/example-scenario/devops/automate-sentinel-integration#microsoft-sentinel-repositories)
sentinel Ci Cd Custom Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd-custom-deploy.md
+
+ Title: Customize repository deployments
+
+description: This article describes how to customize repository deployments for the repositories feature in Microsoft Sentinel.
++ Last updated : 9/15/2022+
+#Customer intent: As a SOC collaborator or MSSP analyst, I want to know how to optimize my source control repositories for continuous integration and continuous delivery (CI/CD). Specifically as an MSSP content manager, I want to know how to deploy one solution to many customer workspaces and still be able to tailor custom content for their environments.
++
+# Customize repository deployments (Public Preview)
+
+There are two primary ways to customize the deployment of your repository content to Microsoft Sentinel workspaces. Each method uses different files and syntax, so consider these examples to get you started.
+
+- Modify the GitHub workflow or DevOps pipeline to customize deployment options such as your connection's deployment trigger, deployment path or usage of smart deployments.
+
+- Use the newly introduced configuration file to control the prioritized order of your content deployments, choose to *exclude* specific content files from those deployments, or map parameter files to specific content files.
+
+> [!IMPORTANT]
+>
+> The Microsoft Sentinel **Repositories** feature is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
++
+## Prerequisites and scope
+
+Microsoft Sentinel currently supports connections to GitHub and Azure DevOps repositories. Before connecting your Microsoft Sentinel workspace to your source control repository, make sure that you have:
+
+- An **Owner** role in the resource group that contains your Microsoft Sentinel workspace *or* a combination of **User Access Administrator** and **Sentinel Contributor** roles to create the connection
+- Contributor access to your GitHub or Azure DevOps repository
+- Actions enabled for GitHub and Pipelines enabled for Azure DevOps
+- Custom content files that you want to deploy to your workspaces, stored as relevant [Azure Resource Manager (ARM) templates](../azure-resource-manager/templates/index.yml)
+
+For more information, see [Validate your content](ci-cd-custom-content.md#validate-your-content).
++
+## Customize the workflow or pipeline
+
+The default workflow only deploys content that has been modified since the last deployment based on commits to the repository. But you may want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder.
+
+Select one of the following tabs depending on your connection type:
+
+# [GitHub](#tab/github)
+
+**To customize your GitHub deployment workflow**:
+
+1. In GitHub, go to your repository and find your workflow in the *.github/workflows* directory.
+
+ The workflow file is the YML file starting with *sentinel-deploy-xxxxx.yml*. Open that file and the workflow name is shown in the first line and has the following default naming convention: `Deploy Content to <workspace-name> [<deployment-id>]`.
+
+ For example: `name: Deploy Content to repositories-demo [xxxxx-dk5d-3s94-4829-9xvnc7391v83a]`
+
+1. Select the pencil button at the top-right of the page to open the file for editing, and then modify the deployment as follows:
+
+ - **To modify the deployment trigger**, update the `on` section in the code, which describes the event that triggers the workflow to run.
+
+ By default, this configuration is set to `on: push`, which means that the workflow is triggered at any push to the connected branch, including both modifications to existing content and additions of new content to the repository. For example:
+
+ ```yml
+ on:
+ push:
+ branches: [ main ]
+ paths:
+ - '**'
+ - '!.github/workflows/**' # this filter prevents other workflow changes from triggering this workflow
+ - '.github/workflows/sentinel-deploy-<deployment-id>.yml'
+ ```
+
+ You may want to change these settings, for example, to schedule the workflow to run periodically, or to combine different workflow events together.
+
+ For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#configuring-workflow-events) on configuring workflow events.
+
+ - **To disable smart deployments**:
+ The smart deployments behavior is separate from the deployment trigger discussed above. Navigate to the `jobs` section of your workflow. Switch the `smartDeployment` default value from `true` to `false`. This will turn off the smart deployments functionality and all future deployments for this connection will redeploy all the repository's relevant content files to the connected workspace(s) once this change is committed.
+
+ - **To modify the deployment path**:
+
+ In the default configuration shown above for the `on` section, the wildcards (`**`) in the first line in the `paths` section indicate that the entire branch is in the path for the deployment triggers.
+
+ This default configuration means that a deployment workflow is triggered anytime that content is pushed to any part of the branch.
+
+ Later on in the file, the `jobs` section includes the following default configuration: `directory: '${{ github.workspace }}'`. This line indicates that the entire GitHub branch is in the path for the content deployment, without filtering for any folder paths.
+
+ To deploy content from a specific folder path only, add it to both the `paths` and the `directory` configuration. For example, to deploy content only from a root folder named `SentinelContent`, update your code as follows:
+
+ ```yml
+ paths:
+ - 'SentinelContent/**'
+ - '!.github/workflows/**' # this filter prevents other workflow changes from triggering this workflow
+ - '.github/workflows/sentinel-deploy-<deployment-id>.yml'
+
+ ...
+ directory: '${{ github.workspace }}/SentinelContent'
+ ```
+
+For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#onpushpull_requestpaths) on GitHub Actions and editing GitHub workflows.
+
+# [Azure DevOps](#tab/azure-devops)
+
+**To customize your Azure DevOps deployment pipeline**:
+
+1. In Azure DevOps, go to your repository and find your pipeline definition file in the *.sentinel* directory.
+
+ The pipeline name is shown in the first line of the pipeline file, and has the following default naming convention: `Deploy Content to <workspace-name> [<deployment-id>]`.
+
+ For example: `name: Deploy Content to repositories-demo [xxxxx-dk5d-3s94-4829-9xvnc7391v83a]`
+
+1. Select the pencil button at the top-right of the page to open the file for editing, and then modify the deployment as follows:
+
+ - **To modify the deployment trigger**, update the `trigger` section in the code, which describes the event that triggers the workflow to run.
+
+ By default, this configuration is set to detect any push to the connected branch, including both modifications to existing content and additions of new content to the repository.
+
+ Change this trigger to any available Azure DevOps trigger, such as a schedule trigger or a pull request trigger. For more information, see the [Azure DevOps trigger documentation](/azure/devops/pipelines/yaml-schema).
+
+ - **To disable smart deployments**:
+ The smart deployments behavior is separate from the deployment trigger discussed above. Navigate to the `ScriptArguments` section of your pipeline. Switch the `smartDeployment` default value from `true` to `false`. This will turn off the smart deployments functionality and all future deployments for this connection will redeploy all the repository's relevant content files to the connected workspace(s) once this change is committed.
+
+ - **To modify the deployment path**:
+
+ The default configuration for the `trigger` section has the following code, which indicates that the `main` branch is in the path for the deployment triggers:
+
+ ```yml
+ trigger:
+ branches:
+ include:
+ - main
+ ```
+
+ This default configuration means that a deployment pipeline is triggered anytime that content is pushed to any part of the `main` branch.
+
+ To deploy content from a specific folder path only, add the folder name to the `include` section (for the trigger) and to the `steps` section (for the deployment path), as needed.
+
+ For example, to deploy content only from a root folder named `SentinelContent` in your `main` branch, add `include` and `workingDirectory` settings to your code as follows:
+
+ ```yml
+ paths:
+ exclude:
+ - .sentinel/*
+ include:
+ - .sentinel/sentinel-deploy-39d8ekc8-397-5963-49g8-5k63k5953829.yml
+ - SentinelContent
+ ....
+ steps:
+ - task: AzurePowerShell@5
+ inputs:
+ azureSubscription: 'Sentinel_Deploy_ServiceConnection_0000000000000000'
+ workingDirectory: 'SentinelContent'
+ ```
+
+For more information, see the [Azure DevOps documentation](/azure/devops/pipelines/yaml-schema) on the Azure DevOps YAML schema.
+++
+> [!IMPORTANT]
+> In both GitHub and Azure DevOps, make sure that you keep the trigger path and deployment path directories consistent.
+>
++
+## Customize your connection configuration
+
+The deployment script for repositories supports the usage of a deployment configuration file for each repository branch as of July 2022. The configuration JSON file helps you map parameter files to relevant content files, prioritize specific content in deployments, and exclude specific content from deployments.
++
+1. Create the file *sentinel-deployment.config* at the root of your repository. Adding, deleting, or modifying this configuration file will cause a full deployment of all the content in the repository according to the updated configuration.
+
+ :::image type="content" source="media/ci-cd-custom-deploy/deployment-config.png" alt-text="Screenshot of a repository root directory. The RepositoriesSampleContent is shown with the location of the sentinel-deployment.config file." lightbox="media/ci-cd-custom-deploy/deployment-config.png":::
+
+1. Include JSON structured content in three optional sections, `"prioritizedcontentfiles":`, `"excludecontentfiles":`, and `"parameterfilemappings":`. If no sections are included or the .config file is omitted, the deployment process will still run. Invalid or unrecognized sections will be ignored.
+
+Here's an example of the entire contents of a valid *sentinel-deployment.config* file. This sample can also be found at the [Sentinel CICD repositories sample](https://github.com/SentinelCICD/RepositoriesSampleContent).
+
+```json
+{
+ "prioritizedcontentfiles": [
+ "parsers/Sample/ASimAuthenticationAWSCloudTrail.json",
+ "workbooks/sample/TrendMicroDeepSecurityAttackActivity_ARM.json",
+ "Playbooks/PaloAlto-PAN-OS/PaloAltoCustomConnector/azuredeploy.json"
+ ],
+ "excludecontentfiles": [
+ "Detections/Sample/PaloAlto-PortScanning.json",
+ "parameters"
+ ],
+ "parameterfilemappings": {
+ "879001c8-2181-4374-be7d-72e5dc69bd2b": {
+ "Playbooks/PaloAlto-PAN-OS/Playbooks/PaloAlto-PAN-OS-BlockIP/azuredeploy.json": "parameters/samples/parameter-file-1.json"
+ },
+ "9af71571-7181-4cef-992e-ef3f61506b4e": {
+ "Playbooks/Enrich-SentinelIncident-GreyNoiseCommunity-IP/azuredeploy.json": "path/to/any-parameter-file.json"
+ }
+ },
+ "DummySection": "This shouldn't impact deployment"
+}
+```
+
+> [!NOTE]
+> Don't use the backslash "\\" character in any of the content paths. Use the forward slash "/" instead.
+>
+
+- **To prioritize content files**:
+
+ As the amount of content in your repository grows, deployment times may increase. Add time-sensitive content to this section to prioritize its deployment when a trigger occurs.
+
+ Add full path names to the `"prioritizedcontentfiles":` section. Wildcard matching is not supported at this time.
+
+- **To exclude content files**, modify the `"excludecontentfiles":` section with full path names of individual .json deployment files.
+
+- **To map parameters**:
+
+ The deployment script will accept three methods to map parameters. The precedence is determined for each included .json deployment file in your repository as follows:
+
+ :::image type="content" source="media/ci-cd-custom-deploy/deploy-parameter-file-precedence.svg" alt-text="A diagram showing the precedence of parameter file mappings.":::
+
+ 1. Is there a mapping in the sentinel-deployment.config?
+ 1. Is there a workspace parameter file?
+ 1. Is there a default parameter file?
+
+Modifying the mapped parameter file listed in the sentinel-deployment.config will trigger the deployment of its paired content file. Adding or modifying a *.parameters-\<workspaceID\>.json* file or *.parameters.json* file triggers a deployment of that corresponding content file along with the newly modified parameters, unless a higher precedence parameter mapping is in place. Other content files won't be deployed if the smart deployments feature is still enabled.
++
+## Next steps
+
+A sample repository is available that demonstrates the deployment configuration file and all three parameter mapping methods.
+
+For more information, see:
+
+- [Sentinel CICD repositories sample](https://github.com/SentinelCICD/RepositoriesSampleContent)
+- [Create Resource Manager parameter file](/../../azure/azure-resource-manager/templates/parameter-files.md)
+- [Parameters in ARM templates](/../../azure/azure-resource-manager/templates/parameters.md)
++
sentinel Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/ci-cd.md
Title: Deploy custom content from your repository+ description: This article describes how to create connections with a GitHub or Azure DevOps repository where you can manage your custom content and deploy it to Microsoft Sentinel.
This procedure describes how to connect a GitHub or Azure DevOps repository to y
Each connection can support multiple types of custom content, including analytics rules, automation rules, hunting queries, parsers, playbooks, and workbooks. For more information, see [About Microsoft Sentinel content and solutions](sentinel-solutions.md).
-**To create your connection**:
+**Create your connection**:
-1. Make sure that you're signed into your source control app with the credentials you want to use for your connection. If you're currently signed in using different credentials, sign out first.
+1. Make sure that you're signed into your source control app with the credentials you want to use for your connection. If you're currently signed in using different credentials, sign out first.
1. In Microsoft Sentinel, on the left under **Content management**, select **Repositories**.
Each connection can support multiple types of custom content, including analytic
- Both parsers and hunting queries use the **Saved Searches** API to deploy content to Microsoft Sentinel. If you select one of these content types, and also have content of the other type in your branch, both content types are deployed.
- - For all other content types, selecting a content type in the **Create a new connection** pane deploys only that content to Microsoft Sentinel. Content of other types is not deployed.
+ - For all other content types, selecting a content type in the **Create a new connection** pane deploys only that content to Microsoft Sentinel. Content of other types isn't deployed.
1. Select **Create** to create your connection. For example:
Each connection can support multiple types of custom content, including analytic
> Due to cross-tenant limitations, if you are creating a connection as a [guest user](../active-directory/external-identities/what-is-b2b.md) on the workspace, your Azure DevOps URL won't appear in the dropdown. Enter it manually instead. >
- You are automatically authorized to Azure DevOps using your current Azure credentials. To ensure valid connectivity, [verify that you've authorized to the same Azure DevOps account](https://aex.dev.azure.com/) that you're connecting to from Microsoft Sentinel or use an InPrivate browser window to create your connection.
+ You're automatically authorized to Azure DevOps using your current Azure credentials. To ensure valid connectivity, [verify that you've authorized to the same Azure DevOps organization](https://aex.dev.azure.com/) that you're connecting to from Microsoft Sentinel or use an InPrivate browser window to create your connection.
1. In Microsoft Sentinel, from the dropdown lists that appear, select your **Organization**, **Project**, **Repository**, **Branch**, and **Content Types**. - Both parsers and hunting queries use the **Saved Searches** API to deploy content to Microsoft Sentinel. If you select one of these content types, and also have content of the other type in your branch, both content types are deployed.
- - For all other content types, selecting a content type in the **Create a new connection** pane deploys only that content to Microsoft Sentinel. Content of other types is not deployed.
+ - For all other content types, selecting a content type in the **Create a new connection** pane deploys only that content to Microsoft Sentinel. Content of other types isn't deployed.
1. Select **Create** to create your connection. For example:
After the deployment is complete:
:::image type="content" source="media/ci-cd/deployment-logs-status.png" alt-text="Screenshot of a GitHub repository connection's deployment logs.":::
-### Customize the deployment workflow
-
-The default workflow only deploys content that has been modified since the last deployment based on commits to the repository. But you may want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder.
-
-Select one of the following tabs depending on your connection type:
-
-# [GitHub](#tab/github)
-
-**To customize your GitHub deployment workflow**:
-
-1. In GitHub, go to your repository and find your workflow in the `.github/workflows` directory.
-
- The workflow file is the YML file starting with `sentinel-deploy-xxxxx.yml`. Open that file and the workflow name is shown in the first line and has the following default naming convention: `Deploy Content to <workspace-name> [<deployment-id>]`.
-
- For example: `name: Deploy Content to repositories-demo [xxxxx-dk5d-3s94-4829-9xvnc7391v83a]`
-
-1. Select the pencil button at the top-right of the page to open the file for editing, and then modify the deployment as follows:
-
- - **To modify the deployment trigger**, update the `on` section in the code, which describes the event that triggers the workflow to run.
-
- By default, this configuration is set to `on: push`, which means that the workflow is triggered at any push to the connected branch, including both modifications to existing content and additions of new content to the repository. For example:
-
- ```yml
- on:
- push:
- branches: [ main ]
- paths:
-    - '**'
-    - '!.github/workflows/**' # this filter prevents other workflow changes from triggering this workflow
-    - '.github/workflows/sentinel-deploy-<deployment-id>.yml'
- ```
-
- You may want to change these settings, for example, to schedule the workflow to run periodically, or to combine different workflow events together.
-
- For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/events-that-trigger-workflows#configuring-workflow-events) on configuring workflow events.
-
- - **To disable smart deployments**:
- The smart deployments behavior is separate from the deployment trigger discussed above. Navigate to the `jobs` section of your workflow. Switch the `smartDeployment` default value from `true` to `false`. This will turn off the smart deployments functionality and all future deployments for this connection will redeploy all the repository's relevant content files to the connected workspace(s) once this change is committed.
-
- - **To modify the deployment path**:
-
- In the default configuration shown above for the `on` section, the wildcards (`**`) in the first line in the `paths` section indicate that the entire branch is in the path for the deployment triggers.
-
- This default configuration means that a deployment workflow is triggered anytime that content is pushed to any part of the branch.
-
- Later on in the file, the `jobs` section includes the following default configuration: `directory: '${{ github.workspace }}'`. This line indicates that the entire GitHub branch is in the path for the content deployment, without filtering for any folder paths.
-
- To deploy content from a specific folder path only, add it to both the `paths` and the `directory` configuration. For example, to deploy content only from a root folder named `SentinelContent`, update your code as follows:
-
- ```yml
- paths:
-    - 'SentinelContent/**'
-    - '!.github/workflows/**' # this filter prevents other workflow changes from triggering this workflow
-    - '.github/workflows/sentinel-deploy-<deployment-id>.yml'
-
- ...
- directory: '${{ github.workspace }}/SentinelContent'
- ```
-
-For more information, see the [GitHub documentation](https://docs.github.com/en/actions/learn-github-actions/workflow-syntax-for-github-actions#onpushpull_requestpaths) on GitHub Actions and editing GitHub workflows.
-
-# [Azure DevOps](#tab/azure-devops)
-
-**To customize your Azure DevOps deployment pipeline**:
-
-1. In Azure DevOps, go to your repository and find your pipeline definition file in the `.sentinel` directory.
-
- The pipeline name is shown in the first line of the pipeline file, and has the following default naming convention: `Deploy Content to <workspace-name> [<deployment-id>]`.
-
- For example: `name: Deploy Content to repositories-demo [xxxxx-dk5d-3s94-4829-9xvnc7391v83a]`
-
-1. Select the pencil button at the top-right of the page to open the file for editing, and then modify the deployment as follows:
-
- - **To modify the deployment trigger**, update the `trigger` section in the code, which describes the event that triggers the workflow to run.
-
- By default, this configuration is set to detect any push to the connected branch, including both modifications to existing content and additions of new content to the repository.
-
- Modify this trigger to any available Azure DevOps Triggers, such as to scheduling or pull request triggers. For more information, see the [Azure DevOps trigger documentation](/azure/devops/pipelines/yaml-schema).
-
- - **To disable smart deployments**:
- The smart deployments behavior is separate from the deployment trigger discussed above. Navigate to the `ScriptArguments` section of your pipeline. Switch the `smartDeployment` default value from `true` to `false`. This will turn off the smart deployments functionality and all future deployments for this connection will redeploy all the repository's relevant content files to the connected workspace(s) once this change is committed.
-
- - **To modify the deployment path**:
-
- The default configuration for the `trigger` section has the following code, which indicates that the `main` branch is in the path for the deployment triggers:
-
- ```yml
- trigger:
- branches:
- include:
- - main
- ```
-
- This default configuration means that a deployment pipeline is triggered anytime that content is pushed to any part of the `main` branch.
-
- To deploy content from a specific folder path only, add the folder name to the `include` section, for the trigger, and the `steps` section, for the deployment path, below as needed.
-
- For example, to deploy content only from a root folder named `SentinelContent` in your `main` branch, add `include` and `workingDirectory` settings to your code as follows:
-
- ```yml
- paths:
- exclude:
- - .sentinel/*
- include:
- - .sentinel/sentinel-deploy-39d8ekc8-397-5963-49g8-5k63k5953829.yml
- - SentinelContent
- ....
- steps:
- - task: AzurePowerShell@5
- inputs:
-     azureSubscription: 'Sentinel_Deploy_ServiceConnection_0000000000000000'
-     workingDirectory: 'SentinelContent'
- ```
-
-For more information, see the [Azure DevOps documentation](/azure/devops/pipelines/yaml-schema) on the Azure DevOps YAML schema.
---
-> [!IMPORTANT]
-> In both GitHub and Azure DevOps, make sure that you keep the trigger path and deployment path directories consistent.
->
+The default workflow only deploys content that has been modified since the last deployment, based on commits to the repository. But you may want to turn off smart deployments or perform other customizations. For example, you can configure different deployment triggers, or deploy content exclusively from a specific root folder. To learn how, see [Customize repository deployments](ci-cd-custom-deploy.md).
## Edit content
-After you've successfully created a connection to your source control repository, anytime content in that repository is modified or added, the modified content is deployed to all connected Microsoft Sentinel workspaces.
-
-We recommend that you edit any content stored in a connected repository *only* in the repository, and not in Microsoft Sentinel. For example, to make changes to your analytics rules, do so directly in GitHub or Azure DevOps.
+When you successfully create a connection to your source control repository, your content is deployed to your Microsoft Sentinel workspace. We recommend that you edit content stored in a connected repository *only* in the repository, and not in Microsoft Sentinel. For example, to make changes to your analytics rules, do so directly in GitHub or Azure DevOps.
-If you have edited the content in Microsoft Sentinel, make sure to export it to your source control repository to prevent your changes from being overwritten the next time the repository content is deployed to your workspace.
+If you edit the content in Microsoft Sentinel instead, make sure to export it to your source control repository to prevent your changes from being overwritten the next time the repository content is deployed to your workspace.
## Delete content
This procedure describes how to remove the connection to a source control reposi
1. In the grid, select the connection you want to remove, and then select **Delete**. 1. Select **Yes** to confirm the deletion.
-After you've removed your connection, content that was previously deployed via the connection remains in your Microsoft Sentinel workspace. Content added to the repository after removing the connection is not deployed.
+After you've removed your connection, content that was previously deployed via the connection remains in your Microsoft Sentinel workspace. Content added to the repository after removing the connection isn't deployed.
> [!TIP] > If you encounter issues or an error message when deleting your connection, we recommend that you check your source control to confirm that the GitHub workflow or Azure DevOps pipeline associated with the connection was deleted.
Use your custom content in Microsoft Sentinel in the same way that you'd use out
For more information, see:
+- [Customize repository deployments](ci-cd-custom-deploy.md)
- [Discover and deploy Microsoft Sentinel solutions (Public preview)](sentinel-solutions-deploy.md)-- [Microsoft Sentinel data connectors](connect-data-sources.md)-- [Advanced Security Information Model (ASIM) parsers (Public preview)](normalization-parsers-overview.md)-- [Visualize collected data](get-visibility.md)-- [Create custom analytics rules to detect threats](detect-threats-custom.md)-- [Hunt for threats with Microsoft Sentinel](hunting.md)-- [Use Microsoft Sentinel watchlists](watchlists.md)-- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md)
+- [Microsoft Sentinel data connectors](connect-data-sources.md)
sentinel Migration Ingestion Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration-ingestion-tool.md
Review the Azure Data Factory (ADF) and Azure Synapse methods, which are better
To use the Copy activity in Azure Data Factory (ADF) or Synapse pipelines: 1. Create and configure a self-hosted integration runtime. This component is responsible for copying the data from your on-premises host. 1. Create linked services for the source data store ([filesystem](../data-factory/connector-file-system.md?tabs=data-factory#create-a-file-system-linked-service-using-ui)) and the sink data store ([blob storage](../data-factory/connector-azure-blob-storage.md?tabs=data-factory#create-an-azure-blob-storage-linked-service-using-ui)).
-3. To copy the data, use the [Copy data tool](../data-factory/quickstart-create-data-factory-copy-data-tool.md). Alternatively, you can use method such as PowerShell, Azure portal, a .NET SDK, and so on.
+3. To copy the data, use the [Copy data tool](../data-factory/quickstart-hello-world-copy-data-tool.md). Alternatively, you can use methods such as PowerShell, the Azure portal, or a .NET SDK.
### AzCopy
sentinel Skill Up Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/skill-up-resources.md
After it's imported, [threat intelligence](understand-threat-intelligence.md) is
* View and manage the imported threat intelligence in **Logs** in the new **Threat Intelligence** area of Microsoft Sentinel.
-* Use the [built-in threat intelligence analytics rule templates](understand-threat-intelligence.md#detect-threats-with-threat-indicator-based-analytics) to generate security alerts and incidents by using your imported threat intelligence.
+* Use the [built-in threat intelligence analytics rule templates](understand-threat-intelligence.md#detect-threats-with-threat-indicator-analytics) to generate security alerts and incidents by using your imported threat intelligence.
* [Visualize key information about your threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators) in Microsoft Sentinel by using the threat intelligence workbook.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
Title: Threat intelligence integration in Microsoft Sentinel | Microsoft Docs
+ Title: Threat intelligence integration in Microsoft Sentinel
description: Learn about the different ways threat intelligence feeds are integrated with and used by Microsoft Sentinel.-- Previously updated : 11/09/2021--++ Last updated : 9/26/2022+ # Threat intelligence integration in Microsoft Sentinel - Microsoft Sentinel gives you a few different ways to [use threat intelligence feeds](work-with-threat-indicators.md) to enhance your security analysts' ability to detect and prioritize known threats. You can use one of many available integrated [threat intelligence platform (TIP) products](connect-threat-intelligence-tip.md), you can [connect to TAXII servers](connect-threat-intelligence-taxii.md) to take advantage of any STIX-compatible threat intelligence source, and you can also make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
You can use one of many available integrated [threat intelligence platform (TIP)
You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions. > [!TIP]
-> If you have multiple workspaces in the same tenant, such as for [Managed Service Providers (MSSPs)](mssp-protect-intellectual-property.md), it may be more cost effective to connect threat indicators only to the centralized workspace.
+> If you have multiple workspaces in the same tenant, such as for [Managed Security Service Providers (MSSPs)](mssp-protect-intellectual-property.md), it may be more cost effective to connect threat indicators only to the centralized workspace.
> > When you have the same set of threat indicators imported into each separate workspace, you can run cross-workspace queries to aggregate threat indicators across your workspaces. Correlate them within your MSSP incident detection, investigation, and hunting experience. >
sentinel Understand Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/understand-threat-intelligence.md
Title: Understand threat intelligence in Microsoft Sentinel | Microsoft Docs
+ Title: Understand threat intelligence in Microsoft Sentinel
description: Understand how threat intelligence feeds are connected to, managed, and used in Microsoft Sentinel to analyze data, detect threats, and enrich alerts.-+ Previously updated : 11/09/2021-- Last updated : 9/26/2022+ # Understand threat intelligence in Microsoft Sentinel
+Microsoft Sentinel is a cloud-native Security Information and Event Management (SIEM) solution with the ability to quickly pull threat intelligence from numerous sources.
## Introduction to threat intelligence -
-Cyber threat intelligence (CTI) is information describing known existing or potential threats to systems and users. This type of information takes many forms, from written reports detailing a particular threat actorΓÇÖs motivations, infrastructure, and techniques, to specific observations of IP addresses, domains, file hashes, and other artifacts associated with known cyber threats. CTI is used by organizations to provide essential context to unusual activity, so that security personnel can quickly take action to protect their people, information, and other assets. CTI can be sourced from many places, such as open-source data feeds, threat intelligence-sharing communities, commercial intelligence feeds, and local intelligence gathered in the course of security investigations within an organization.
+Cyber threat intelligence (CTI) is information describing existing or potential threats to systems and users. This intelligence takes many forms, from written reports detailing a particular threat actor's motivations, infrastructure, and techniques, to specific observations of IP addresses, domains, file hashes, and other artifacts associated with known cyber threats. CTI is used by organizations to provide essential context to unusual activity, so security personnel can quickly take action to protect their people, information, and assets. CTI can be sourced from many places, such as open-source data feeds, threat intelligence-sharing communities, commercial intelligence feeds, and local intelligence gathered in the course of security investigations within an organization.
-Within a Security Information and Event Management (SIEM) solution like Microsoft Sentinel, the most commonly used form of CTI is threat indicators, also known as Indicators of Compromise or IoCs. Threat indicators are data that associate observed artifacts such as URLs, file hashes, or IP addresses with known threat activity such as phishing, botnets, or malware. This form of threat intelligence is often called *tactical threat intelligence* because it can be applied to security products and automation in large scale to detect potential threats to an organization and protect against them. In Microsoft Sentinel, you can use threat indicators to help detect malicious activity observed in your environment and provide context to security investigators to help inform response decisions.
+For SIEM solutions like Microsoft Sentinel, the most common forms of CTI are threat indicators, also known as Indicators of Compromise (IoC) or Indicators of Attack (IoA). Threat indicators are data that associate observed artifacts such as URLs, file hashes, or IP addresses with known threat activity such as phishing, botnets, or malware. This form of threat intelligence is often called *tactical threat intelligence* because it can be applied to security products and automation in large scale to detect potential threats to an organization and protect against them. In Microsoft Sentinel, you can use threat indicators to help detect malicious activity observed in your environment and provide context to security investigators to help inform response decisions.
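+Once indicators are imported, they're stored in the **ThreatIntelligenceIndicator** table in Log Analytics, where you can query them directly. The following query is a minimal sketch of that kind of review; adjust the time range and grouping to your needs:
+
+```kusto
+// Minimal sketch: summarize indicators imported in the last week by threat type and source
+ThreatIntelligenceIndicator
+| where TimeGenerated > ago(7d)
+| summarize IndicatorCount = count() by ThreatType, SourceSystem
+| order by IndicatorCount desc
+```
+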
Integrate threat intelligence (TI) into Microsoft Sentinel through the following activities:
Integrate threat intelligence (TI) into Microsoft Sentinel through the following
Microsoft enriches all imported threat intelligence indicators with [GeoLocation and WhoIs data](#view-your-geolocation-and-whois-data-enrichments-public-preview), which is displayed together with other indicator details.
-> [!TIP]
-> Threat Intelligence also provides useful context within other Microsoft Sentinel experiences such as **Hunting** and **Notebooks**. For more information, see [Jupyter Notebooks in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/using-threat-intelligence-in-your-jupyter-notebooks/ba-p/860239) and [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md).
->
+Threat Intelligence also provides useful context within other Microsoft Sentinel experiences such as **Hunting** and **Notebooks**. For more information, see [Jupyter Notebooks in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-sentinel/using-threat-intelligence-in-your-jupyter-notebooks/ba-p/860239) and [Tutorial: Get started with Jupyter notebooks and MSTICPy in Microsoft Sentinel](notebook-get-started.md).
+ ## Import threat intelligence with data connectors
For more details on viewing and managing your threat indicators, see [Work with
### View your GeoLocation and WhoIs data enrichments (Public preview)
-Microsoft enriches each indicator with extra GeoLocation and WhoIs data, providing more context for investigations where the selected indicator of compromise (IOC) is found.
-
-You can view GeoLocation and WhoIs data on the **Threat Intelligence** pane for each indicator of compromise that you've imported into Microsoft Sentinel.
-
-For example, use GeoLocation data to find details like *Organization* or *Country* for the indicator, and WhoIs data to find data like *Registrar* and *Record creation* data.
+Microsoft enriches IP and domain indicators with extra GeoLocation and WhoIs data, providing more context for investigations where the selected indicator of compromise (IOC) is found.
-## Detect threats with threat indicator-based analytics
+You can view GeoLocation and WhoIs data on the **Threat Intelligence** pane for each indicator of those types that you've imported into Microsoft Sentinel.
-The most important use case for threat indicators in SIEM solutions like Microsoft Sentinel is to power analytics rules for threat detection. These indicator-based rules compare raw events from your data sources against your threat indicators to detect security threats in your organization. In Microsoft Sentinel **Analytics**, you create analytics rules that run on a schedule and generate security alerts. The rules are driven by queries, along with configurations that determine how often the rule should run, what kind of query results should generate security alerts and incidents, and which if any automations to trigger in response.
+For example, use GeoLocation data to find details like *Organization* or *Country* for an IP indicator, and WhoIs data to find details like *Registrar* and *Record creation* for a domain indicator.
-While you can always create new analytics rules from scratch, Microsoft Sentinel provides a set of built-in rule templates, created by Microsoft security engineers, that you can use as-is or modify to meet your needs. You can readily identify the rule templates that use threat indicators, as they are all titled beginning with "**TI map**…". All these rule templates operate similarly, with the only difference being which type of threat indicators are used (domain, email, file hash, IP address, or URL) and which event type to match against. Each template lists the required data sources needed for the rule to function, so you can see at a glance if you have the necessary events already imported in Microsoft Sentinel. When you edit and save an existing rule template or create a new rule, it is enabled by default.
+## Detect threats with threat indicator analytics
-You can find your enabled rule in the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. You can edit, enable, disable, duplicate or delete the active rule from there. The new rule runs immediately upon activation, and from then on will run on its defined schedule.
+The most important use case for threat indicators in SIEM solutions like Microsoft Sentinel is to power analytics rules for threat detection. These indicator-based rules compare raw events from your data sources against your threat indicators to detect security threats in your organization. In Microsoft Sentinel **Analytics**, you create analytics rules that run on a schedule and generate security alerts. The rules are driven by queries, along with configurations that determine how often the rule should run, what kind of query results should generate security alerts and incidents, and, optionally, which automated responses to trigger.
-According to the default settings, each time the rule runs on its schedule, any results found will generate a security alert. Security alerts in Microsoft Sentinel can be viewed in the **Logs** section of Microsoft Sentinel, in the **SecurityAlert** table under the **Microsoft Sentinel** group.
+While you can always create new analytics rules from scratch, Microsoft Sentinel provides a set of built-in rule templates, created by Microsoft security engineers, that use your threat indicators. These built-in rule templates are based on the type of threat indicators (domain, email, file hash, IP address, or URL) and the data source events you want to match. Each template lists the data sources required for the rule to function, so you can see at a glance whether you already have the necessary events imported in Microsoft Sentinel.
-In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md).
+By default, when these built-in rules are triggered, an alert is created. In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents, which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md).
-For more details on using threat indicators in your analytics rules, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#detect-threats-with-threat-indicator-based-analytics).
+For more details on using threat indicators in your analytics rules, see [Use threat intelligence to detect threats](use-threat-indicators-in-analytics-rules.md).
## Workbooks provide insights about your threat intelligence
In this document, you learned about the threat intelligence capabilities of Micr
- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel. - [Work with threat indicators](work-with-threat-indicators.md) throughout the Microsoft Sentinel experience. - Detect threats with [built-in](./detect-threats-built-in.md) or [custom](./detect-threats-custom.md) analytics rules in Microsoft Sentinel-- [Investigate incidents](./investigate-cases.md) in Microsoft Sentinel.
+- [Investigate incidents](./investigate-cases.md) in Microsoft Sentinel.
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
+
+ Title: Use matching analytics to detect threats
+
+description: This article explains how to detect threats with Microsoft generated threat intelligence in Microsoft Sentinel.
++ Last updated : 9/26/2022+++
+# Use matching analytics to detect threats
+
+Take advantage of threat intelligence produced by Microsoft to generate high-fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This rule matches Common Event Format (CEF) logs, Syslog data, or Windows DNS events with domain, IP, and URL threat indicators.
+
+> [!IMPORTANT]
+> Matching analytics is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+One or more of the following data sources must be connected:
+
+- Common Event Format (CEF)
+- DNS (Preview)
+- Syslog
++
+## Configure the matching analytics rule
+
+Matching analytics is configured when you enable the **Microsoft Threat Intelligence Analytics** rule.
+
+1. Select **Analytics** from the **Configuration** section.
+
+1. Select the **Rule templates** menu tab.
+
+1. In the search window, type *threat intelligence*.
+
+1. Select the **Microsoft Threat Intelligence Analytics** rule template.
+
+1. Select **Create rule**. The rule details are read-only, and the rule is enabled by default.
+
+1. Select **Review** > **Create**.
++
+Alerts are grouped on a per-observable basis. For example, all alerts generated in a 24-hour time period that match the `contoso.com` domain are grouped into a single incident with the appropriate severity.
++
+## Data sources and indicators
+
+Microsoft Threat Intelligence Analytics matches your logs with domain, IP, and URL indicators in the following ways (a sample spot-check query follows this list):
+
+- **CEF** logs ingested into the Log Analytics **CommonSecurityLog** table will match URL and domain indicators if populated in the `RequestURL` field, and IPv4 indicators in the `DestinationIP` field.
+
+- Windows **DNS** logs where event `SubType == "LookupQuery"` ingested into the **DnsEvents** table will match domain indicators populated in the `Name` field, and IPv4 indicators in the `IPAddresses` field.
+
+- **Syslog** events where `Facility == "cron"` ingested into the **Syslog** table will match domain and IPv4 indicators directly from the `SyslogMessage` field.
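+
+The built-in rule performs this matching for you, but it can be useful to confirm that the fields it relies on are actually populated in your ingested logs. The following query is a minimal spot-check sketch for CEF data; it isn't the rule's own logic, and you can adapt the same idea to the **DnsEvents** and **Syslog** tables:
+
+```kusto
+// Minimal sketch: confirm the CEF fields that matching analytics relies on are populated
+CommonSecurityLog
+| where TimeGenerated > ago(1d)
+| summarize EventsWithRequestURL = countif(isnotempty(RequestURL)),
+            EventsWithDestinationIP = countif(isnotempty(DestinationIP))
+```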
++
+## Triage an incident generated by matching analytics
+
+If Microsoft's analytics finds a match, any alerts generated are grouped into incidents.
+
+Use the following steps to triage through the incidents generated by the **Microsoft Threat Intelligence Analytics** rule:
+
+1. In the Microsoft Sentinel workspace where you've enabled the **Microsoft Threat Intelligence Analytics** rule, select **Incidents** and search for **Microsoft Threat Intelligence Analytics**.
+
+ Any incidents found are shown in the grid.
+
+1. Select **View full details** to view entities and other details about the incident, such as specific alerts.
+
+ For example:
+
+ :::image type="content" source="media/work-with-threat-indicators/matching-analytics.png" alt-text="Screenshot of incident generated by matching analytics with details pane.":::
+
+When a match is found, the indicator is also published to the Log Analytics **ThreatIntelligenceIndicator** table and displayed on the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Threat Intelligence Analytics**.
+
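+To review those published indicators yourself, a query along the following lines is a reasonable starting point (a minimal sketch; it assumes the source value matches the rule's display name):
+
+```kusto
+// Minimal sketch: list indicators published by the matching analytics rule
+ThreatIntelligenceIndicator
+| where SourceSystem == "Microsoft Threat Intelligence Analytics"
+| project TimeGenerated, ThreatType, DomainName, NetworkIP, Url, ConfidenceScore
+```
+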
+For example, in the **ThreatIntelligenceIndicator** table:
++
+In the **Threat Intelligence** page:
++
+## Get additional context from Microsoft Defender Threat Intelligence
+
+Part of the Microsoft Threat Intelligence available through matching analytics is sourced from Microsoft Defender Threat Intelligence (MDTI). Along with high-fidelity alerts and incidents, MDTI indicators include a link to a reference article in the MDTI community portal.
++
+For more information, see the [MDTI portal](https://ti.defender.microsoft.com).
+
+## Next steps
+
+In this article, you learned how to connect threat intelligence produced by Microsoft to generate alerts and incidents. For more information about threat intelligence in Microsoft Sentinel, see the following articles:
+
+- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md).
+- Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md).
+- [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Microsoft Sentinel.
+- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
sentinel Use Threat Indicators In Analytics Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-threat-indicators-in-analytics-rules.md
+
+ Title: Use threat indicators in analytics rules
+
+description: This article explains how to generate alerts and incidents with threat intelligence indicators in Microsoft Sentinel.
++ Last updated : 8/30/2022+++
+# Use threat indicators in analytics rules
+
+Power your analytics rules with your threat indicators to automatically generate alerts based on the threat intelligence you've integrated.
+
+## Prerequisites
+
+- Threat indicators. These can be from threat intelligence feeds, threat intelligence platforms, bulk import from a flat file, or manual input.
+
+- Data sources. Events from your data connectors must be flowing to your Microsoft Sentinel workspace.
+
+- An analytics rule whose name begins with "*TI map*...", which can map the threat indicators you have to the events you've ingested.
++
+## Configure a rule to generate security alerts
+
+Below is an example of how to enable and configure a rule to generate security alerts using the threat indicators you've imported into Microsoft Sentinel. For this example, use the rule template called **TI map IP entity to AzureActivity**. This rule will match any IP address-type threat indicator with all your Azure Activity events. When a match is found, an **alert** will be generated along with a corresponding **incident** for investigation by your security operations team. This particular analytics rule requires the **Azure Activity** data connector (to import your Azure subscription-level events), and one or both of the **Threat Intelligence** data connectors (to import threat indicators). This rule will also trigger from imported indicators or manually created ones.
+
+1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
+
+1. Choose the **workspace** to which you imported threat indicators using the **Threat Intelligence** data connectors and Azure activity data using the **Azure Activity** data connector.
+
+1. Select **Analytics** from the **Configuration** section of the Microsoft Sentinel menu.
+
+1. Select the **Rule templates** tab to see the list of available analytics rule templates.
+
+1. Find the rule titled **TI map IP entity to AzureActivity** and ensure you have connected all the required data sources as shown below.
+
+ :::image type="content" source="media/work-with-threat-indicators/threat-intel-required-data-sources.png" alt-text="Screenshot of required data sources for the TI map IP entity to AzureActivity analytics rule.":::
+
+1. Select the **TI map IP entity to AzureActivity** rule and then select **Create rule** to open a rule configuration wizard. Configure the settings in the wizard and then select **Next: Set rule logic >**.
+
+ :::image type="content" source="media/work-with-threat-indicators/threat-intel-create-analytics-rule.png" alt-text="Screenshot of the create analytics rule configuration wizard.":::
+
+1. The rule logic portion of the wizard has been pre-populated with the following items:
+
+ - The query that will be used in the rule.
+
+ - Entity mappings, which tell Microsoft Sentinel how to recognize entities like Accounts, IP addresses, and URLs, so that **incidents** and **investigations** understand how to work with the data in any security alerts generated by this rule.
+
+ - The schedule to run this rule.
+
+ - The number of query results needed before a security alert is generated.
+
+ The default settings in the template are:
+
+ - Run once an hour.
+
+   - Match any IP address threat indicators from the **ThreatIntelligenceIndicator** table with any IP address found in the last hour of events from the **AzureActivity** table. A simplified sketch of this matching logic is shown after these steps.
+
+ - Generate a security alert if the query results are greater than zero, meaning if any matches are found.
+
+ - The rule is enabled.
+
+   You can leave the default settings or change them to meet your requirements, and you can define incident-generation settings on the **Incident settings** tab. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). When you're finished, select the **Automated response** tab.
+
+1. Configure any automation you'd like to trigger when a security alert is generated from this analytics rule. Automation in Microsoft Sentinel is done using combinations of **automation rules** and **playbooks** powered by Azure Logic Apps. To learn more, see this [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](./tutorial-respond-threats-playbook.md). When finished, select the **Next: Review >** button to continue.
+
+1. When you see the message that the rule validation has passed, select the **Create** button and you are finished.
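+
+The following query is a simplified sketch of the kind of matching logic the **TI map IP entity to AzureActivity** template runs; it isn't the template's exact query, and the lookback values and column choices are illustrative:
+
+```kusto
+// Simplified sketch: match active IP indicators against recent Azure Activity events
+let dt_lookBack = 1h;   // events lookback (illustrative)
+let ioc_lookBack = 14d; // indicators lookback (illustrative)
+ThreatIntelligenceIndicator
+| where TimeGenerated >= ago(ioc_lookBack)
+| where Active == true and ExpirationDateTime > now()
+| where isnotempty(NetworkIP)
+| join kind=innerunique (
+    AzureActivity
+    | where TimeGenerated >= ago(dt_lookBack)
+    | where isnotempty(CallerIpAddress)
+  ) on $left.NetworkIP == $right.CallerIpAddress
+| project TimeGenerated, Description, ThreatType, NetworkIP, OperationNameValue, Caller
+```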
+
+You can find your enabled rules in the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. You can edit, enable, disable, duplicate, or delete the active rule from there. The new rule runs immediately upon activation, and from then on will run on its defined schedule.
+
+According to the default settings, each time the rule runs on its schedule, any results found will generate a security alert. Security alerts in Microsoft Sentinel can be viewed in the **Logs** section of Microsoft Sentinel, in the **SecurityAlert** table under the **Microsoft Sentinel** group.
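+
+For example, the following query is a minimal sketch for reviewing recent alerts in that table:
+
+```kusto
+// Minimal sketch: review alerts generated in the last 24 hours
+SecurityAlert
+| where TimeGenerated > ago(24h)
+| project TimeGenerated, AlertName, AlertSeverity, Description
+| order by TimeGenerated desc
+```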
+
+In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents, which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md).
+
+Because analytics rules can only look back up to 14 days, Microsoft Sentinel refreshes indicators every 12 days to make sure they remain available for matching through these rules.
+
+## Next steps
+
+In this article, you learned how to use threat intelligence indicators to detect threats. For more about threat intelligence in Microsoft Sentinel, see the following articles:
+
+- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md).
+- Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md).
+- [Connect threat intelligence platforms](./connect-threat-intelligence-tip.md) to Microsoft Sentinel.
+- See which [TIP platforms, TAXII feeds, and enrichments](threat-intelligence-integration.md) can be readily integrated with Microsoft Sentinel.
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
The **Microsoft Threat Intelligence Matching Analytics** rule currently matches
- [DNS](./data-connectors-reference.md#windows-dns-server-preview) - [Syslog](connect-syslog.md)
-For more information, see [Detect threats using matching analytics (Public preview)](work-with-threat-indicators.md#detect-threats-using-matching-analytics-public-preview).
+For more information, see [Detect threats using matching analytics (Public preview)](use-matching-analytics-to-detect-threats.md).
### Use Azure AD data with Azure Sentinel's IdentityInfo table (Public preview)
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
Title: Work with threat indicators in Microsoft Sentinel | Microsoft Docs
-description: This article explains how to view, create, manage, visualize, and detect threats with threat intelligence indicators in Microsoft Sentinel.
-
+ Title: Work with threat indicators in Microsoft Sentinel
+description: This article explains how to view, create, manage, and visualize threat intelligence indicators in Microsoft Sentinel.
+ Previously updated : 11/09/2021-- Last updated : 8/30/2022+ # Work with threat indicators in Microsoft Sentinel
Tagging threat indicators is an easy way to group them together to make them eas
Microsoft Sentinel also allows you to edit indicators, whether they've been created directly in Microsoft Sentinel, or come from partner sources, like TIP and TAXII servers. For indicators created in Microsoft Sentinel, all fields are editable. For indicators coming from partner sources, only specific fields are editable, including tags, *Expiration date*, *Confidence*, and *Revoked*.
-## Detect threats with threat indicator-based analytics
-
-The most important use case for threat indicators in SIEM solutions like Microsoft Sentinel is to power threat detection analytics rules. These indicator-based rules compare raw events from your data sources against your threat indicators to determine the presence of security threats in your organization. In Microsoft Sentinel **Analytics**, you create analytics rules that run on a scheduled basis and generate security alerts. The rules are driven by queries, along with configurations that determine how often the rule should run, what kind of query results should generate security alerts and incidents, and which automations to trigger in response.
-
-While you can always create new analytics rules from scratch, Microsoft Sentinel provides a set of built-in rule templates, created by Microsoft security engineers, that you can use as-is or modify to meet your needs. You can readily identify the rule templates that use threat indicators, as they are all titled beginning with "*TI map*…". All these rule templates operate similarly, with the only difference being which type of threat indicators are used (domain, email, file hash, IP address, or URL) and which event type to match against. Each template lists the required data sources needed for the rule to function, so you can see at a glance if you have the necessary events already imported in Microsoft Sentinel. When you edit and save an existing rule template or create a new rule, it is enabled by default.
-
-### Configure a rule to generate security alerts
-
-Below is an example of how to enable and configure a rule to generate security alerts using the threat indicators youΓÇÖve imported into Microsoft Sentinel. For this example, use the rule template called **TI map IP entity to AzureActivity**. This rule will match any IP address-type threat indicator with all your Azure Activity events. When a match is found, an **alert** will be generated, and a corresponding **incident** for investigation by your security operations team. This analytics rule will operate successfully only if you have enabled one or both of the **Threat Intelligence** data connectors (to import threat indicators) and the **Azure Activity** data connector (to import your Azure subscription-level events).
-
-1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
-
-1. Choose the **workspace** to which you imported threat indicators using the **Threat Intelligence** data connectors and Azure activity data using the **Azure Activity** data connector.
-
-1. Select **Analytics** from the **Configuration** section of the Microsoft Sentinel menu.
-
-1. Select the **Rule templates** tab to see the list of available analytics rule templates.
-
-1. Find the rule titled **TI map IP entity to AzureActivity** and ensure you have connected all the required data sources as shown below.
-
- :::image type="content" source="media/work-with-threat-indicators/threat-intel-required-data-sources.png" alt-text="Required data sources":::
-
-1. Select the **TI map IP entity to AzureActivity** rule and then select **Create rule** to open a rule configuration wizard. Configure the settings in the wizard and then select **Next: Set rule logic >**.
-
- :::image type="content" source="media/work-with-threat-indicators/threat-intel-create-analytics-rule.png" alt-text="Create analytics rule":::
-
-1. The rule logic portion of the wizard has been pre-populated with the following items:
-
- - The query that will be used in the rule.
-
- - Entity mappings, which tell Microsoft Sentinel how to recognize entities like Accounts, IP addresses, and URLs, so that **incidents** and **investigations** understand how to work with the data in any security alerts generated by this rule.
-
- - The schedule to run this rule.
-
- - The number of query results needed before a security alert is generated.
-
- The default settings in the template are:
-
- - Run once an hour.
-
- - Match any IP address threat indicators from the **ThreatIntelligenceIndicator** table with any IP address found in the last one hour of events from the **AzureActivity** table.
-
- - Generate a security alert if the query results are greater than zero, meaning if any matches are found.
-
- You can leave the default settings or change them to meet your requirements, and you can define incident-generation settings on the **Incident settings** tab. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md). When you are finished, select the **Automated response** tab.
-
-1. Configure any automation youΓÇÖd like to trigger when a security alert is generated from this analytics rule. Automation in Microsoft Sentinel is done using combinations of **automation rules** and **playbooks** powered by Azure Logic Apps. To learn more, see this [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](./tutorial-respond-threats-playbook.md). When finished, select the **Next: Review >** button to continue.
-
-1. When you see the message that the rule validation has passed, select the **Create** button and you are finished.
-
-You can find your enabled rules in the **Active rules** tab of the **Analytics** section of Microsoft Sentinel. You can edit, enable, disable, duplicate, or delete the active rule from there. The new rule runs immediately upon activation, and from then on will run on its defined schedule.
-
-According to the default settings, each time the rule runs on its schedule, any results found will generate a security alert. Security alerts in Microsoft Sentinel can be viewed in the **Logs** section of Microsoft Sentinel, in the **SecurityAlert** table under the **Microsoft Sentinel** group.
-
-In Microsoft Sentinel, the alerts generated from analytics rules also generate security incidents, which can be found in **Incidents** under **Threat Management** on the Microsoft Sentinel menu. Incidents are what your security operations teams will triage and investigate to determine the appropriate response actions. You can find detailed information in this [Tutorial: Investigate incidents with Microsoft Sentinel](./investigate-cases.md).
-
-IMPORTANT: Microsoft Sentinel refreshes indicators every 12 days to make sure they are available for matching purposes through the analytic rules.
-
-## Detect threats using matching analytics (Public preview)
-
-> [!IMPORTANT]
-> Matching analytics is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-[Create a rule](detect-threats-built-in.md#use-built-in-analytics-rules) using the built-in **Microsoft Threat Intelligence Analytics** analytics rule template to have Microsoft Sentinel match Microsoft-generated threat intelligence data with the logs you've ingested in to Microsoft Sentinel.
-
-Matching threat intelligence data with your logs helps to generate high-fidelity alerts and incidents, with appropriate severities applied. When a match is found, any alerts generated are grouped into incidents.
-
-Alerts are grouped on a per-observable basis, over a 24-hour timeframe. So, for example, all alerts generated in a 24-hour time period that match the `abc.com` domain are grouped into a single incident.
-
-### Triage through an incident generated by matching analytics
-
-If you have a match found, any alerts generated are grouped into incidents.
-
-Use the following steps to triage through the incidents generated by the **Microsoft Threat Intelligence Analytics** rule:
-
-1. In the Microsoft Sentinel workspace where you've enabled the **Microsoft Threat Intelligence Analytics** rule, select **Incidents** and search for **Microsoft Threat Intelligence Analytics**.
-
- Any incidents found are shown in the grid.
-
-1. Select **View full details** to view entities and other details about the incident, such as specific alerts.
-
- For example:
-
- :::image type="content" source="media/work-with-threat-indicators/matching-analytics.png" alt-text="Sample matched analytics details.":::
-
-When a match is found, the indicator is also published to the Log Analytics **ThreatIntelligenceIndicators**, and displayed in the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Threat Intelligence Analytics**.
-
-For example, in the **ThreatIntelligenceIndicators** log:
--
-In the **Threat Intelligence** page:
--
-### Supported log sources for matching analytics
-
-The Microsoft Threat Intelligence Matching Analytics matches the log sources in the following tables with domain, IP and Microsoft Defender Threat Intelligence (MDTI) indicators.
-
-#### [Domain](#tab/domain)
-
-| Log source | Description |
-| | |
-| [CEF](connect-common-event-format.md) | Matching is done for all CEF logs that are ingested in the Log Analytics **CommonSecurityLog** table, except when the `DeviceVendor` is `Cisco`. <br><br>To match Microsoft generated threat intelligence with domain indicators in CEF logs, make sure to map the domain in the `RequestURL` field of the CEF log.|
-| [DNS](./data-connectors-reference.md#windows-dns-server-preview) | Matching is done for all DNS logs that are lookup queries from clients to DNS services (`SubType == "LookupQuery"`). DNS queries are only processed for IPv4 (`QueryType="A"`) and IPv6 queries (`QueryType="AAAA"`).<br><br>To match Microsoft generated threat intelligence with domain indicators in DNS logs, no manual mapping of columns is needed. All columns are standard from Windows DNS Server, and the domains will be in the `Name` column by default.|
-| [Syslog](connect-syslog.md) | Matching is only done for Syslog events where the `Facility` is `cron`. <br><br>To match Microsoft generated threat intelligence with domain indicators from Syslog, no manual mapping of columns is needed. The details originate from the `SyslogMessage` field by default and the rule parses the domain directly from it.|
-
-#### [IPv4](#tab/ipv4)
-
-| Log source | Description |
-| | |
-|[CEF](connect-common-event-format.md) | Matching is done for all CEF logs that are ingested in the Log Analytics **CommonSecurityLog** table, except when the `DeviceVendor` is `Cisco`. <br><br>To match Microsoft generated threat intelligence with IP indicators in CEF logs, no manual mapping needs to be done. The IP is populated in the `DestinationIP` field by default.|
-| [DNS](./data-connectors-reference.md#windows-dns-server-preview) | Matching is done for all DNS logs that are lookup queries from clients to DNS services (`SubType == "LookupQuery"`). DNS queries are only processed for IPv4 (`QueryType="A"`). <br><br>To match Microsoft generated threat intelligence with IP indicators in DNS logs, no manual mapping of columns is needed. All columns are standard from Windows DNS Server, and the IPs will be in the `IPAddresses` column by default.|
-| [Syslog](connect-syslog.md) | Matching is only done for Syslog events where the `Facility` is `cron`. <br><br>To match Microsoft generated threat intelligence with IP indicators from Syslog, no manual mapping of columns is needed. The details originate from the `SyslogMessage` field by default and the rule parses the IP directly from it.|
-
-Microsoft Threat Intelligence Matching Analytics only matches IPv4 indicators.
-
-#### [Microsoft Defender Threat Intelligence (MDTI)](#tab/microsoft-defender-threat-intelligence)
-| Log source | Description |
-| | |
-|[CEF](connect-common-event-format.md) | Matching is done for all CEF logs that are ingested in the Log Analytics **CommonSecurityLog** table, except when the `DeviceVendor` is `Cisco`. <br><br>To match Microsoft generated threat intelligence with MDTI indicators in CEF logs, no manual mapping needs to be done. The URL is populated in the `RequestURL` field by default.|
--- ## Workbooks provide insights about your threat intelligence You can use a purpose-built Microsoft Sentinel workbook to visualize key information about your threat intelligence in Microsoft Sentinel, and you can easily customize the workbook according to your business needs.
service-bus-messaging Message Transfers Locks Settlement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-transfers-locks-settlement.md
Using any of the supported Service Bus API clients, send operations into Service
If the message is rejected by Service Bus, the rejection contains an error indicator and text with a **tracking-id** in it. The rejection also includes information about whether the operation can be retried with any expectation of success. In the client, this information is turned into an exception and raised to the caller of the send operation. If the message has been accepted, the operation silently completes.
-When you use the AMQP protocol, which is the exclusive protocol for the .NET Standard, Java, JavaScript, Python, and Go clients, and [an option for the .NET Framework client](service-bus-amqp-dotnet.md), message transfers and settlements are pipelined and asynchronous. We recommend that you use the asynchronous programming model API variants.
-
+Advanced Message Queuing Protocol (AMQP) is the only protocol supported for .NET Standard, Java, JavaScript, Python, and Go clients. For [.NET Framework clients](service-bus-amqp-dotnet.md), you can use Service Bus Messaging Protocol (SBMP) or AMQP. When you use the AMQP protocol, message transfers and settlements are pipelined and asynchronous. We recommend that you use the asynchronous programming model API variants.
+
A sender can put several messages on the wire in rapid succession without having to wait for each message to be acknowledged, as would otherwise be the case with the SBMP protocol or with HTTP 1.1. Those asynchronous send operations complete as the respective messages are accepted and stored, on partitioned entities or when send operations to different entities overlap. The completions might also occur out of the original send order. The strategy for handling the outcome of send operations can have immediate and significant performance impact for your application. The examples in this section are written in C# and apply to Java futures, Java monos, JavaScript promises, and equivalent concepts in other languages.
storage-mover Project Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/project-manage.md
Remove-AzStorageMoverProject `
## Next steps After your projects are created, you can begin working with job definitions.
-Check back soon for a guide on how to manage job definitions.
+> [!div class="nextstepaction"]
+> [Define a migration job](job-definition-create.md)
storage Customer Managed Keys Configure Cross Tenant Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md
Previously updated : 10/03/2022 Last updated : 10/04/2022
After you've specified the key from the key vault in the customer's tenant, the
### [PowerShell](#tab/azure-powershell)
-N/A
+To configure cross-tenant customer-managed keys for a new storage account in PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview), version 4.4.2-preview.
+
+Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
+
+Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+
+```azurepowershell
+$accountName = "<storage-account>"
+$kvUri = "<key-vault-uri>"
+$keyName = "<keyName>"
+$multiTenantAppId = "<multi-tenant-app-id>"
+
+Set-AzStorageAccount -ResourceGroupName $rgName `
+ -Name $accountName `
+ -KeyvaultEncryption `
+ -UserAssignedIdentityId $userIdentity.Id `
+ -IdentityType SystemAssignedUserAssigned `
+ -KeyName $keyName `
+ -KeyVaultUri $kvUri `
+ -KeyVaultUserAssignedIdentityId $userIdentity.Id `
+ -KeyVaultFederatedClientId $multiTenantAppId
+```
### [Azure CLI](#tab/azure-cli)
storage Customer Managed Keys Configure Cross Tenant New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md
Previously updated : 10/03/2022 Last updated : 10/04/2022
To configure cross-tenant customer-managed keys for a new storage account in the
To configure cross-tenant customer-managed keys for a new storage account in PowerShell, first install the [Az.Storage PowerShell module](https://www.powershellgallery.com/packages/Az.Storage/4.4.2-preview), version 4.4.2-preview.
-Next, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+Next, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), providing the resource ID for the user-assigned managed identity that you configured previously in the ISV's subscription, and the application (client) ID for the multi-tenant application that you configured previously in the ISV's subscription. Provide the key vault URI and key name from the customer's key vault.
+
+Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
```azurepowershell $accountName = "<account-name>"
-$keyVaultUri = "<key-vault-uri>"
+$kvUri = "<key-vault-uri>"
$keyName = "<keyName>" $location = "<location>" $multiTenantAppId = "<application-id>"
New-AzStorageAccount -ResourceGroupName $rgName `
-UserAssignedIdentityId $userIdentity.Id ` -IdentityType SystemAssignedUserAssigned ` -KeyName $keyName `
- -KeyVaultUri $keyVaultUri `
+ -KeyVaultUri $kvUri `
-KeyVaultUserAssignedIdentityId $userIdentity.Id ` -KeyVaultFederatedClientId $multiTenantAppId ```
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Previously updated : 03/31/2022 Last updated : 10/04/2022
By default, storage accounts accept connections from clients on any network. You
> [!WARNING] > Changing this setting can impact your application's ability to connect to Azure Storage. Make sure to grant access to any allowed networks or set up access through a [private endpoint](storage-private-endpoints.md) before you change this setting.
-
+ ### [Portal](#tab/azure-portal) 1. Go to the storage account you want to secure.
By default, storage accounts accept connections from clients on any network. You
+> [!CAUTION]
+> By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. For this reason, if you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) you had previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services may still have access to the storage account after setting **Public network access** to **Disabled**.
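If you need to verify what's configured before you make this change, the following Azure CLI sketch shows one way to review the current public network access setting, trusted-services bypass, and resource instance rules, and then set **Public network access** to **Disabled**. The account and resource group names are placeholders, and the property names shown may vary with the API version in use.

```azurecli
# Review the public network access setting, trusted-services bypass, and resource instance rules.
# <storage-account> and <resource-group> are placeholders.
az storage account show \
    --name <storage-account> \
    --resource-group <resource-group> \
    --query "{publicNetworkAccess:publicNetworkAccess, bypass:networkRuleSet.bypass, resourceAccessRules:networkRuleSet.resourceAccessRules}"

# Disable public network access for the account.
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --public-network-access Disabled
```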
+ ## Grant access from a virtual network You can configure storage accounts to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription or to a VNet in a different subscription, including subscriptions that belong to a different Azure Active Directory tenant.
When planning for disaster recovery during a regional outage, you should create
To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All the subnets in a subscription that has the _AllowGlobalTagsForStorage_ feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all traffic from these subnets to storage accounts will use a private IP address as the source IP. As a result, any IP network rules that permit traffic from those subnets will no longer have an effect. > [!NOTE]
-> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
+> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update&preserve-view=true) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update&preserve-view=true) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
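A minimal Azure CLI sketch of that sequence, assuming the feature registers under the `Microsoft.Network` namespace and using placeholder resource names, might look like the following; re-applying the `Microsoft.Storage` service endpoint is shown here as one way to perform the subnet update.

```azurecli
# Register the AllowGlobalTagsForStorage feature in the subscription of the virtual network
# (assumed namespace: Microsoft.Network; placeholder names are used throughout).
az feature register --namespace Microsoft.Network --name AllowGlobalTagsForStorage
az feature show --namespace Microsoft.Network --name AllowGlobalTagsForStorage --query properties.state

# After the feature shows as Registered, perform an update subnet operation,
# for example by re-applying the Microsoft.Storage service endpoint on the subnet.
az network vnet subnet update \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --name <subnet-name> \
    --service-endpoints Microsoft.Storage
```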
#### [Portal](#tab/azure-portal)
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Previously updated : 09/29/2022 Last updated : 10/04/2022 # Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares-
-[Azure Files](storage-files-introduction.md) supports identity-based authentication over Server Message Block (SMB) using three different methods:
--- On-premises Active Directory Domain Services (AD DS)-- Azure Active Directory Domain Services (Azure AD DS)-- Azure Active Directory (Azure AD) Kerberos for hybrid identities (preview) We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure file
Previously updated : 01/31/2022 Last updated : 10/04/2022
File scale targets apply to individual files stored in Azure file shares.
| Maximum concurrent request rate | 1,000 IOPS | Up to 8,000<sup>1</sup> | | Maximum ingress for a file | 60 MiB/sec | 200 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup>| | Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup> |
-| Maximum concurrent handles | 2,000 handles | 2,000 handles |
+| Maximum concurrent handles per file, directory, and share root | 2,000 handles | 2,000 handles |
<sup>1 Applies to read and write IOs (typically smaller IO sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower.</sup> <sup>2 Subject to machine network limits, available bandwidth, IO sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./storage-files-smb-multichannel-performance.md).</sup>
synapse-analytics Connect Synapse Link Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/connect-synapse-link-sql-database.md
This article provides a step-by-step guide for getting started with Azure Synaps
## Prerequisites
-* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. The current tutorial is to create Synapse link for SQL in public network. The assumption is that you have checked "Disable Managed virtual network" and "Allow connections from all IP address" when creating Synapse workspace. If you want to configure Synapse link for Azure SQL Database with network security, please also refer to [this](connect-synapse-link-sql-database-vnet.md).
+* [Create a new Synapse workspace](https://portal.azure.com/#create/Microsoft.Synapse) to get Azure Synapse Link for SQL. This tutorial creates Azure Synapse Link for SQL in a public network. It assumes that you selected "Disable Managed virtual network" and "Allow connections from all IP addresses" when you created the Synapse workspace. To configure Azure Synapse Link for Azure SQL Database with network security, see [Configure Synapse link for Azure SQL Database with network security](connect-synapse-link-sql-database-vnet.md).
* For DTU-based provisioning, make sure your Azure SQL Database service is at least Standard tier with a minimum of 100 DTUs. Free, Basic, or Standard tiers with fewer than 100 DTUs provisioned are not supported.
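To confirm that a database meets the 100-DTU minimum, you can check its current service objective and scale it if needed. The following Azure CLI sketch uses placeholder names and assumes Standard S3 (100 DTUs) as the target tier.

```azurecli
# Check the current service objective of the database (placeholder names).
az sql db show \
    --resource-group <resource-group> \
    --server <server-name> \
    --name <database-name> \
    --query currentServiceObjectiveName

# Scale to Standard S3 (100 DTUs) if the database is currently below that tier.
az sql db update \
    --resource-group <resource-group> \
    --server <server-name> \
    --name <database-name> \
    --service-objective S3
```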
You can stop the Azure Synapse Link connection in Synapse Studio as follows:
> [!NOTE] > If you restart a link connection after stopping it, it will start from a full initial load from your source database followed by incremental change feeds. - ## Next steps If you are using a different type of database, see how to:
If you are using a different type of database, see how to:
* [Configure Azure Synapse Link for Azure Cosmos DB](../../cosmos-db/configure-synapse-link.md?context=/azure/synapse-analytics/context/context) * [Configure Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/azure-synapse-link-synapse?context=/azure/synapse-analytics/context/context) * [Get started with Azure Synapse Link for SQL Server 2022](connect-synapse-link-sql-server-2022.md)
+* [Get or set a managed identity for an Azure SQL Database logical server or managed instance](/sql/azure-sql/database/authentication-azure-ad-user-assigned-managed-identity.md#get-or-set-a-managed-identity-for-a-logical-server-or-managed-instance)
time-series-insights How To Tsi Gen2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen2-migration.md
Data
:::image type="content" source="media/gen2-migration/adx-log-analytics.png" alt-text="Screenshot of the Azure Data Explorer Log Analytics Workspace" lightbox="media/gen2-migration/adx-log-analytics.png"::: 1. Data partitioning.
- 1. For small size data, the default ADX partitioning is enough. For more complex scenario, with large datasets and right push rate custom ADX data partitioning is more appropriate. Data partitioning is beneficial for scenarios, as follows:
- 1. Improving query latency in big data sets.
- 1. When querying historical data.
- 1. When ingesting out-of-order data.
- 1. The custom data partitioning should include:
- 1. The timestamp column, which results in time-based partitioning of extents.
- 1. A string-based column, which corresponds to the Time Series ID with highest cardinality.
- 1. An example of data partitioning containing a Time Series ID column and a timestamp column is:
-
-```
-.alter table events policy partitioning
- {
- "PartitionKeys": [
- {
- "ColumnName": "timeSeriesId",
- "Kind": "Hash",
- "Properties": {
- "Function": "XxHash64",
- "MaxPartitionCount": 32,
- "PartitionAssignmentMode": "Uniform"
- }
- },
- {
- "ColumnName": "timestamp",
- "Kind": "UniformRange",
- "Properties": {
- "Reference": "1970-01-01T00:00:00",
- "RangeSize": "1.00:00:00",
- "OverrideCreationTime": true
- }
- }
- ] ,
- "EffectiveDateTime": "1970-01-01T00:00:00",
- "MinRowCountPerOperation": 0,
- "MaxRowCountPerOperation": 0,
- "MaxOriginalSizePerOperation": 0
- }
-```
-For more references, check [ADX Data Partitioning Policy](/azure/data-explorer/kusto/management/partitioningpolicy).
+ 1. For most data sets, the default ADX partitioning is enough.
+ 1. Data partitioning is beneficial in a very specific set of scenarios, and shouldn't be applied otherwise:
+ 1. Improving query latency in big data sets where most queries filter on a high cardinality string column, e.g. a time-series ID.
+ 1. When ingesting out-of-order data, e.g. when events from the past may be ingested days or weeks after their generation in the origin.
+ 1. For more information, check [ADX Data Partitioning Policy](/azure/data-explorer/kusto/management/partitioningpolicy).
#### Prepare for Data Ingestion
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 09/06/2022 Last updated : 10/04/2022
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## September 2022
+
+Here's what changed in September 2022:
+
+### Single sign-on and passwordless authentication now in public preview
+
+The ability to enable an Azure Active Directory (Azure AD)-based single sign-on experience and support for passwordless authentication using Windows Hello and security devices (like FIDO2 keys) are now in public preview. This feature is available for Windows 10, Windows 11, and Windows Server 2022 session hosts with the September Cumulative Update Preview installed. The single sign-on experience is currently compatible with the Windows Desktop and web clients. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-public-preview-of-sso-and-passwordless-authentication/ba-p/3638244).
+
+### Connection graphics data logs for Azure Virtual Desktop now in public preview
+
+The ability to collect graphics data for your Azure Virtual Desktop connections through Azure Log Analytics is now in public preview. This data can help administrators understand factors across the server, client, and network that contribute to slow or choppy experiences for a user. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/collect-and-query-graphics-data-for-azure-virtual-desktop/m-p/3638565).
+
+### Multimedia redirection enhancements now in public preview
+
+An upgraded version of multimedia redirection (MMR) for Azure Virtual Desktop is now in public preview. We've made various improvements to this version, including more supported websites, remote app browser support, and enhancements to media controls for better clarity and one-click tracing. Learn more at [Use multimedia redirection on Azure Virtual Desktop (preview)](multimedia-redirection.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/new-multimedia-redirection-upgrades-on-azure-virtual-desktop-are/m-p/3639520).
+
+### Grouping costs by Azure Virtual Desktop host pool now in public preview
+
+Microsoft Cost Management has a new feature in public preview that lets you group Azure Virtual Desktop costs with Azure tags by using the cm-resource-parent tag key. Cost grouping makes it easier to understand and manage costs by host pool. Learn more at [Tag Azure Virtual Desktop resources to manage costs](tag-virtual-desktop-resources.md) and [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/group-costs-by-host-pool-with-cost-management-now-in-public/ba-p/3638285).
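As a rough sketch, the tag can be applied with the Azure CLI as shown below; both resource IDs are placeholders, and the tag is applied to each resource (for example, a session host VM) whose cost should roll up under the host pool.

```azurecli
# Apply the cm-resource-parent tag to a session host VM, pointing at the host pool's resource ID.
# Both resource IDs below are placeholders.
az tag update \
    --resource-id '<session-host-vm-resource-id>' \
    --operation Merge \
    --tags cm-resource-parent='<host-pool-resource-id>'
```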
+ ## August 2022 Here's what changed in August 2022:
+### Azure portal updates
+
+We've made the following updates to the Azure portal:
+
+- Improved search, filtering, and performance.
+- Added Windows Server 2022 images to the image selection list.
+- Added "Preferred group type" to the "Basics" tab in the host pool creation process.
+- Enabled custom images for trusted launch VMs.
+- New selectable cards, including the following:
+ - Unavailable machines.
+ - User session.
+- Removed the "Advanced" tab for the process to add a VM to the host pool.
+- Removed the storage blob image option from the host pool creation and adding VM processes.
+- Bug fixes.
+- Made the following improvements to the "getting started" setup process:
+ - Unchecked link Azure template.
+ - Removed validation on existing domain admins.
+ ### Updates to the preview version of FSLogix profiles for Azure AD-joined VMs
-We've updated the public preview version of the Azure Files integration with Azure Active Directory (Azure AD) Kerberos for hybrid identities so that it's now simpler to deploy and manage. The update should give users using FSLogix user profiles on Azure AD-joined session host an overall better experience. For more information, see [the Azure Files blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-leverage-azure-active-directory-kerberos-with/ba-p/3612111).
+We've updated the public preview version of the Azure Files integration with Azure AD Kerberos for hybrid identities so that it's now simpler to deploy and manage. The update should give users using FSLogix user profiles on Azure AD-joined session host an overall better experience. For more information, see [the Azure Files blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-leverage-azure-active-directory-kerberos-with/ba-p/3612111).
### Single sign-on and passwordless authentication now in Windows Insider preview
-In the Windows Insider build of Windows 11 22H2, you can now enable a preview version of the Azure Active Directory (AD)-based single sign-on experience. This Windows Insider build also supports passwordless authentication with Windows Hello and security devices like FIDO2 keys. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/insider-preview-single-sign-on-and-passwordless-authentication/m-p/3608842).
+In the Windows Insider build of Windows 11 22H2, you can now enable a preview version of the Azure AD-based single sign-on experience. This Windows Insider build also supports passwordless authentication with Windows Hello and security devices like FIDO2 keys. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/insider-preview-single-sign-on-and-passwordless-authentication/m-p/3608842).
### Universal Print for Azure Virtual Desktop now in Windows Insider preview
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
description: Learn how to use customer-managed keys with your Azure disks in dif
Previously updated : 09/23/2022 Last updated : 10/04/2022
If you have questions about cross-tenant customer-managed keys with managed disk
## Limitations -- Currently this feature is only available in the North Central US, West Central US, and West US regions.
+- Currently this feature is only available in the North Central US, West Central US, West US, East US 2, and North Europe regions.
- Managed Disks and the customer's Key Vault must be in the same Azure region, but they can be in different subscriptions. - This feature doesn't support Ultra Disks or Azure Premium SSD v2 managed disks.
virtual-network Nat Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-availability-zones.md
If your scenario requires inbound endpoints, you have two options:
| (1) | **Align** the inbound endpoints with the respective **zonal stacks** you're creating for outbound. | Create a standard load balancer with a zonal frontend. | Same failure model for inbound and outbound. Simpler to operate. | Individual IP addresses per zone may need to be masked by a common DNS name. | | (2) | **Overlay** the zonal stacks with a cross-zone inbound endpoint. | Create a standard load balancer with a zone-redundant front-end. | Single IP address for inbound endpoint. | Varying models for inbound and outbound. More complex to operate. |
-Note that zonal configuration for a load balancer works differently from NAT gateway. The load balancer's availability zone selection is synonymous with its frontend IP configuration's zone selection. For public load balancers, if the public IP in the Load balancer's frontend is zone redundant then the load balancer is also zone-redundant. If the public IP in the load balancer's frontend is zonal, then the load balancer will also be designated to the same zone.
+> [!NOTE]
+> Zonal configuration for a load balancer works differently from NAT gateway. The load balancer's availability zone selection is synonymous with its frontend IP configuration's zone selection. For public load balancers, if the public IP in the load balancer's frontend is zone-redundant, then the load balancer is also zone-redundant. If the public IP in the load balancer's frontend is zonal, then the load balancer will also be designated to the same zone.
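The following Azure CLI sketch, using placeholder names, illustrates the difference: the zone selection of the Standard public IP used in the frontend determines whether the load balancer frontend is zone-redundant or zonal.

```azurecli
# Zone-redundant frontend: a Standard public IP that spans zones 1-3 (placeholder names).
az network public-ip create \
    --resource-group <resource-group> \
    --name myZoneRedundantIP \
    --sku Standard \
    --zone 1 2 3

# Zonal frontend: pinning the public IP to a single zone pins the frontend to that zone.
az network public-ip create \
    --resource-group <resource-group> \
    --name myZonalIP \
    --sku Standard \
    --zone 1

# Create a Standard load balancer whose frontend uses the zone-redundant public IP.
az network lb create \
    --resource-group <resource-group> \
    --name myLoadBalancer \
    --sku Standard \
    --public-ip-address myZoneRedundantIP
```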
## Limitations
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
The following features are currently in gated public preview. After working with
|||||| |1|Virtual hub router upgrade: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to VMSS.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to VMSS, even if an NVA is provisioned in the hub. After upgrading, users will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).| |2|Virtual hub router upgrade: Compatibility with NVA in a spoke VNet.|For deployments with an NVA provisioned in a spoke VNet, the customer will have to delete and recreate the BGP peering with the spoke NVA.|March 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate the BGP peering with a spoke NVA after upgrading.|
-|3|Virtual hub router upgrade: Spoke VNets in different region than the Virtual hub.|If one or more spoke VNets are in a different region than the virtual hub, then these VNet connections will have to be deleted and recreated after the hub router is upgraded|August 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate these VNet connections after upgrading the hub router.|
-|4|Virtual hub router upgrade: More than 100 Spoke VNets connected to the Virtual hub.|If there are more than 100 spoke VNets connected to the virtual hub, then the virtual hub router can't be upgraded.|September 2022|The Virtual WAN team is working on removing this limitation of 100 spoke VNets connected to the virtual hub during the router upgrade.|
+|3|Virtual hub router upgrade: More than 100 Spoke VNets connected to the Virtual hub.|If there are more than 100 spoke VNets connected to the virtual hub, then the virtual hub router can't be upgraded.|September 2022|The Virtual WAN team is working on removing this limitation of 100 spoke VNets connected to the virtual hub during the router upgrade.|
## Next steps
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
Another consideration is the address pool size for translation. If the target ad
> [!IMPORTANT] > * NAT is supported on the following SKUs: VpnGw2~5, VpnGw2AZ~5AZ. > * NAT is supported on IPsec cross-premises connections only. VNet-to-VNet connections or P2S connections are not supported.
+> * Every Dynamic NAT rule can be assigned to a single connection.
## <a name="mode"></a>NAT mode: ingress & egress
web-application-firewall Waf Front Door Drs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md
Previously updated : 08/28/2022 Last updated : 10/05/2022 # Web Application Firewall DRS rule groups and rules
Custom rules are always applied before rules in the Default Rule Set are evaluat
The Microsoft Threat Intelligence Collection rules are written in partnership with the Microsoft Threat Intelligence team to provide increased coverage, patches for specific vulnerabilities, and better false positive reduction.
+Some of the built-in DRS rules are disabled by default because they've been replaced by newer rules in the Microsoft Threat Intelligence Collection. For example, rule ID 942440, *SQL Comment Sequence Detected.*, has been disabled and replaced by Microsoft Threat Intelligence Collection rule 99031002. The replacement rule reduces the risk of false positive detections from legitimate requests.
+ ### <a name="anomaly-scoring-mode"></a>Anomaly scoring When you use DRS 2.0 or later, your WAF uses *anomaly scoring*. Traffic that matches any rule isn't immediately blocked, even when your WAF is in prevention mode. Instead, the OWASP rule sets define a severity for each rule: *Critical*, *Error*, *Warning*, or *Notice*. The severity affects a numeric value for the request, which is called the *anomaly score*:
web-application-firewall Waf Front Door Rate Limit Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-rate-limit-configure.md
Previously updated : 09/07/2022 Last updated : 10/05/2022 zone_pivot_groups: web-application-firewall-configuration
The Azure Web Application Firewall (WAF) rate limit rule for Azure Front Door co
This article shows how to configure a WAF rate limit rule on Azure Front Door Standard and Premium tiers. ## Scenario
$frontDoorSecurityPolicy = New-AzFrontDoorCdnSecurityPolicy `
-Parameter $securityPolicyParameters ``` ++
+## Prerequisites
+
+Before you begin to set up a rate limit policy, set up your Azure CLI environment and create a Front Door profile.
+
+### Set up your Azure CLI environment
+
+The Azure CLI provides a set of commands that use the [Azure Resource Manager](../../azure-resource-manager/management/overview.md) model for managing your Azure resources.
+
+You can install the [Azure CLI](/cli/azure/install-azure-cli) on your local machine and use it in any shell session. Here you sign in with your Azure credentials and install the Azure CLI extension for Front Door Standard/Premium.
+
+#### Connect to Azure with an interactive dialog for sign-in
+
+Sign in to Azure by running the following command:
+
+```azurecli
+az login
+```
+
+#### Install the Front Door extension for the Azure CLI
+
+Install the `front-door` extension to work with the Front Door WAF from the Azure CLI:
+
+```azurecli
+az extension add --name front-door
+```
+
+You use the `az afd` commands to work with Front Door Standard/Premium resources, and you use the `az network front-door waf-policy` commands to work with WAF resources.
+
+### Create a resource group
+
+Use the [az group create](/cli/azure/group#az-group-create) command to create a new resource group for your Front Door profile and WAF policy. Update the resource group name and location for your own requirements:
+
+```azurecli
+resourceGroupName='FrontDoorRateLimit'
+
+az group create \
+ --name $resourceGroupName \
+ --location westus
+```
+
+## Create a Front Door profile
+
+Use the [az afd profile create](/cli/azure/afd/profile#az-afd-profile-create) command to create a new Front Door profile.
+
+In this example, you create a Front Door standard profile named *MyFrontDoorProfile*:
+
+```azurecli
+frontDoorProfileName='MyFrontDoorProfile'
+
+az afd profile create \
+ --profile-name $frontDoorProfileName \
+ --resource-group $resourceGroupName \
+ --sku Standard_AzureFrontDoor
+```
+
+### Create a Front Door endpoint
+
+Use the [az afd endpoint create](/cli/azure/afd/endpoint#az-afd-endpoint-create) command to add an endpoint to your Front Door profile.
+
+Front Door endpoints must have globally unique names, so update the value of the `frontDoorEndpointName` variable to something unique.
+
+```azurecli
+frontDoorEndpointName='<unique-front-door-endpoint-name>'
+
+az afd endpoint create \
+ --endpoint-name $frontDoorEndpointName \
+ --profile-name $frontDoorProfileName \
+ --resource-group $resourceGroupName
+```
+
+## Create a WAF policy
+
+Use the [az network front-door waf-policy create](/cli/azure/network/front-door/waf-policy#az-network-front-door-waf-policy-create) command to create a WAF policy:
+
+```azurecli
+wafPolicyName='MyWafPolicy'
+
+az network front-door waf-policy create \
+ --name $wafPolicyName \
+ --resource-group $resourceGroupName \
+ --sku Standard_AzureFrontDoor
+```
+
+## Prepare to add a custom rate limit rule
+
+Use the [az network front-door waf-policy rule create](/cli/azure/network/front-door/waf-policy/rule#az-network-front-door-waf-policy-rule-create) command to create a custom rate limit rule. The following example sets the limit to 1000 requests per minute.
+
+Rate limit rules must contain a match condition, which you create in the next step. So, in this command, you include the `--defer` argument, which tells the Azure CLI not to submit the rule to Azure just yet.
+
+```azurecli
+az network front-door waf-policy rule create \
+ --name rateLimitRule \
+ --policy-name $wafPolicyName \
+ --resource-group $resourceGroupName \
+ --rule-type RateLimitRule \
+ --rate-limit-duration 1 \
+ --rate-limit-threshold 1000 \
+ --action Block \
+ --priority 1 \
+ --defer
+```
+
+When any client IP address sends more than 1000 requests within one minute, the WAF blocks subsequent requests until the next minute starts.
+
+## Add a match condition
+
+Use the [az network front-door waf-policy rule match-condition add](/cli/azure/network/front-door/waf-policy/rule/match-condition#az-network-front-door-waf-policy-rule-match-condition-add) command to add a match condition to your custom rule. The match condition identifies requests that should have the rate limit applied.
+
+The following example matches requests where the *RequestUri* variable contains the string */promo*:
+
+```azurecli
+az network front-door waf-policy rule match-condition add \
+ --match-variable RequestUri \
+ --operator Contains \
+ --values '/promo' \
+ --name rateLimitRule \
+ --policy-name $wafPolicyName \
+ --resource-group $resourceGroupName
+```
+
+When you submit this command, the Azure CLI creates the rate limit rule and match condition together.
+
+## Configure a security policy to associate your Front Door profile with your WAF policy
+
+Use the [az afd security-policy create](/cli/azure/afd/security-policy#az-afd-security-policy-create) command to create a security policy for your Front Door profile. A security policy associates your WAF policy with domains that you want to be protected by the WAF rule.
+
+In this example, you associate the endpoint's default hostname with your WAF policy:
+
+```azurecli
+securityPolicyName='MySecurityPolicy'
+
+wafPolicyResourceId=$(az network front-door waf-policy show --name $wafPolicyName --resource-group $resourceGroupName --query id --output tsv)
+frontDoorEndpointResourceId=$(az afd endpoint show --endpoint-name $frontDoorEndpointName --profile-name $frontDoorProfileName --resource-group $resourceGroupName --query id --output tsv)
+
+az afd security-policy create \
+ --security-policy-name $securityPolicyName \
+ --profile-name $frontDoorProfileName \
+ --resource-group $resourceGroupName \
+ --domains $frontDoorEndpointResourceId \
+ --waf-policy $wafPolicyResourceId
+```
+
+The preceding code looks up the Azure resource identifiers for the WAF policy and Front Door endpoint so that it can associate them with your security policy.
+++ > [!NOTE] > Whenever you make changes to your WAF policy, you don't need to recreate the Front Door security policy. WAF policy updates are automatically applied to the Front Door domains.