Updates from: 05/11/2022 01:08:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Use on-demand provisioning to provision a user into an application in seconds. A
1. Search for a user by first name, last name, display name, user principal name, or email address. > [!NOTE] > For Cloud HR provisioning app (Workday/SuccessFactors to AD/Azure AD), the input value is different.
- > For Workday scenario, please provide "WID" of the user in Workday.
+ > For Workday scenario, please provide "WorkerID" or "WID" of the user in Workday.
> For SuccessFactors scenario, please provide "personIdExternal" of the user in SuccessFactors. 1. Select **Provision** at the bottom of the page.
active-directory Scenario Spa App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-spa-app-registration.md
Previously updated : 10/12/2021 Last updated : 05/10/2022 # Customer intent: As an application developer, I want to know how to write a single-page application by using the Microsoft identity platform.
To register a single-page application (SPA) in the Microsoft identity platform,
For both MSAL.js 1.0- and 2.0-based applications, start by completing the following steps to create the initial app registration. 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
1. Search for and select **Azure Active Directory**. 1. Under **Manage**, select **App registrations** > **New registration**. 1. Enter a **Name** for your application. Users of your app might see this name, and you can change it later.
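If you prefer scripting, the same initial registration can also be created from the Azure CLI. A minimal sketch, assuming you're already signed in with `az login` (the display name is a placeholder; platform and redirect URI settings can then be configured in the portal as described above):

```azurecli-interactive
# Create the initial app registration with just a display name
az ad app create --display-name "my-spa-app"
```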
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Title: 'Quickstart: Add guest users in the Azure portal - Azure AD'
+ Title: 'Quickstart: Add a guest user and send an invitation - Azure AD'
description: Use this quickstart to learn how Azure AD admins can add B2B guest users in the Azure portal and walk through the B2B invitation workflow. - Previously updated : 06/18/2020 Last updated : 05/10/2022
-#Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a guest user in the portal, and understand the end user experience.
+#Customer intent: As a tenant admin, I want to walk through the B2B invitation workflow so that I can understand how to add a guest user in the portal, and understand the end user experience.
-# Quickstart: Add guest users to your directory in the Azure portal
+# Quickstart: Add a guest user and send an invitation
-You can invite anyone to collaborate with your organization by adding them to your directory as a guest user. Then you can either send an invitation email that contains a redemption link or send a direct link to an app you want to share. Guest users can sign in with their own work, school, or social identities. Along with this quickstart, you can learn more about adding guest users [in the Azure portal](add-users-administrator.md), via [PowerShell](b2b-quickstart-invite-powershell.md), or [in bulk](tutorial-bulk-invite.md).
-
-In this quickstart, you'll add a new guest user to your Azure AD directory via the Azure portal, send an invitation, and see what the guest user's invitation redemption process looks like.
+With Azure AD [B2B collaboration](what-is-b2b.md), you can invite anyone to collaborate with your organization using their own work, school, or social account. In this quickstart, you'll learn how to add a new guest user to your Azure AD directory in the Azure portal. You'll also send an invitation and see what the guest user's invitation redemption process looks like. In addition to this quickstart, you can learn more about adding guest users [in the Azure portal](add-users-administrator.md), via [PowerShell](b2b-quickstart-invite-powershell.md), or [in bulk](tutorial-bulk-invite.md).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-To complete the scenario in this tutorial, you need:
+To complete the scenario in this quickstart, you need:
+
+- A role that allows you to create users in your tenant directory, such as the Global Administrator role or a limited administrator directory role (for example, Guest inviter or User administrator).
+- Access to a valid email address outside of your Azure AD tenant, such as a separate work, school, or social email address. You'll use this email to create the guest account in your tenant directory and access the invitation.
## Add a new guest user in Azure AD
-1. Sign in to the [Azure portal](https://portal.azure.com/) as an Azure AD administrator.
-2. In the left pane, select **Azure Active Directory**.
-3. Under **Manage**, select **Users**.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that's been assigned the Global administrator, Guest inviter, or User administrator role.
+
+1. Under **Azure services**, select **Azure Active Directory** (or use the search box to find and select **Azure Active Directory**).
+
+ ![Screenshot showing where to select the Azure Active Directory service.](media/quickstart-add-users-portal/azure-active-directory-service.png)
+
+1. Under **Manage**, select **Users**.
![Screenshot showing where to select the Users option](media/quickstart-add-users-portal/quickstart-users-portal-user.png)
-4. Select **New guest user**.
+1. Select **New guest user**.
- ![Screenshot showing where to select the New guest user option](media/quickstart-add-users-portal/quickstart-users-portal-user-3.png)
+ ![Screenshot showing where to select the New guest user option.](media/quickstart-add-users-portal/new-guest-user.png)
-5. On the **New user** page, select **Invite user** and then add the guest user's information.
+1. On the **New user** page, select **Invite user** and then add the guest user's information.
   - **Name**: The first and last name of the guest user. - **Email address (required)**: The email address of the guest user. - **Personal message (optional)**: Include a personal welcome message to the guest user. - **Groups**: You can add the guest user to one or more existing groups, or you can do it later.
- - **Directory role**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
+ - **Roles**: If you require Azure AD administrative permissions for the user, you can add them to an Azure AD role.
+
+ ![Screenshot showing the new user page.](media/quickstart-add-users-portal/invite-user.png)
-6. Select **Invite** to automatically send the invitation to the guest user. A notification appears in the upper right with the message **Successfully invited user**.
-7. After you send the invitation, the user account is automatically added to the directory as a guest.
+1. Select **Invite** to automatically send the invitation to the guest user. A notification appears in the upper right with the message **Successfully invited user**.
-## Assign an app to the guest user
-Add the Salesforce app to your test tenant and assign the test guest user to the app.
-1. Sign in to the Azure portal as an Azure AD administrator.
-2. In the left pane, select **Enterprise applications**.
-3. Select **New application**.
-4. Under **Add from the gallery**, search for **Salesforce**, and then select it.
+1. After you send the invitation, the user account is automatically added to the directory as a guest.
- ![Screenshot showing the Add from the gallery search box](media/quickstart-add-users-portal/quickstart-users-portal-select-salesforce.png)
-5. Select **Add**.
-6. Under **Manage**, select **Single sign-on**, and under **Single Sign-on Mode**, select **Password-based Sign-on**, and click **Save**.
-7. Under **Manage**, select **Users and groups** > **Add user** > **Users and groups**.
-8. Use the search box to search for the test user (if necessary) and select the test user in the list. Then click **Select**.
-9. Select **Assign**.
+ ![Screenshot showing the new guest user in the directory.](media/quickstart-add-users-portal/new-guest-user-directory.png)
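The same invitation can also be created programmatically through the Microsoft Graph invitations API. A minimal sketch using `az rest` (the email address and redirect URL are placeholders):

```azurecli-interactive
# POST to the Microsoft Graph invitations endpoint to invite a guest user
az rest --method POST \
    --url https://graph.microsoft.com/v1.0/invitations \
    --body '{"invitedUserEmailAddress": "guest@example.com", "inviteRedirectUrl": "https://myapps.microsoft.com", "sendInvitationMessage": true}'
```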
## Accept the invitation Now sign in as the guest user to see the invitation.
-1. Sign in to your test guest user's email account.
-2. In your inbox, find the "You're invited" email.
+
+1. Sign in to your test guest user's email account.
+
+1. In your inbox, open the email from "Microsoft Invitations on behalf of Contoso."
![Screenshot showing the B2B invitation email](media/quickstart-add-users-portal/quickstart-users-portal-email-small.png)
-3. In the email body, select **Get Started**. A **Review permissions** page opens in the browser.
+1. In the email body, select **Accept invitation**. A **Review permissions** page opens in the browser.
+
+ ![Screenshot showing the Review permissions page.](media/quickstart-add-users-portal/consent-screen.png)
- ![Screenshot showing the Review permissions page](media/quickstart-add-users-portal/quickstart-users-portal-accept.png)
+1. Select **Accept**.
-4. Select **Accept**. The Access Panel opens, which lists the applications the guest user can access.
+1. The **My Apps** page opens. Because we haven't assigned any apps to this guest user, you'll see the message "There are no apps to show." In a real-life scenario, you would [add the guest user to an app](add-users-administrator.md#add-guest-users-to-an-application) so the app would appear here.
## Clean up resources
-When no longer needed, delete the test guest user and the test app.
-1. Sign in to the Azure portal as an Azure AD administrator.
-2. In the left pane, select **Azure Active Directory**.
-3. Under **Manage**, select **Enterprise applications**.
-4. Open the application **Salesforce**, and then select **Delete**.
-5. In the left pane, select **Azure Active Directory**.
-6. Under **Manage**, select **Users**.
-7. Select the test user, and then select **Delete user**.
+
+When no longer needed, delete the test guest user.
+
1. Sign in to the [Azure portal](https://portal.azure.com/) with an account that's been assigned the Global administrator, Guest inviter, or User administrator role.
+1. Select the **Azure Active Directory** service.
+1. Under **Manage**, select **Users**.
+1. Select the test user, and then select **Delete user**.
## Next steps
-In this tutorial, you created a guest user in the Azure portal, and sent an invitation to share apps. Then you viewed the redemption process from the guest user's perspective and verified that the app appeared on the guest user's Access Panel. To learn more about adding guest users for collaboration, see [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md).
+
+In this quickstart, you created a guest user in the Azure portal and sent an invitation to share apps. Then you viewed the redemption process from the guest user's perspective and verified that the guest user was able to access their My Apps page. To learn more about adding guest users for collaboration, see [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md).
active-directory Uniflow Online Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uniflow-online-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
| **Identifier** |
|--|
| `https://<tenant_domain_name>.eu.uniflowonline.com` |
+ | `https://<tenant_domain_name>.uk.uniflowonline.com` |
| `https://<tenant_domain_name>.us.uniflowonline.com` |
| `https://<tenant_domain_name>.sg.uniflowonline.com` |
| `https://<tenant_domain_name>.jp.uniflowonline.com` |
| `https://<tenant_domain_name>.au.uniflowonline.com` |
+
b. In the **Sign on URL** text box, type a URL using one of the following patterns:

| **Sign on URL** |
|--|
| `https://<tenant_domain_name>.eu.uniflowonline.com` |
+ | `https://<tenant_domain_name>.uk.uniflowonline.com` |
| `https://<tenant_domain_name>.us.uniflowonline.com` |
| `https://<tenant_domain_name>.sg.uniflowonline.com` |
| `https://<tenant_domain_name>.jp.uniflowonline.com` |
active-directory Workday Writeback Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/workday-writeback-tutorial.md
In this section, you will configure how writeback attributes flow from Azure AD
>[!div class="mx-imgBorder"] >![Azure portal](./media/workday-inbound-tutorial/workday-writeback-mapping.png)
-5. Use the guidance shared below to map phone number attribute values from Azure AD to Workday.
+5. Use the guidance shared below to map phone number attribute values from Azure AD to Workday. See [Writeback expression mapping examples](#writeback-expression-mapping-examples) to configure the right expression mapping for each attribute.
 | Workday phone attribute | Expected value | Mapping guidance |
 |-|-|-|
 | WorkphoneLandlineIsPrimary | true/false | Constant or expression mapping whose output is "true" or "false" string value. |
- | WorkphoneLandlineCountryCodeName | [Three-letter ISO 3166-1 country code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) | Constant or expression mapping whose output is a three letter country code. |
- | WorkphoneLandlineCountryCodeNumber | [International country calling code](https://en.wikipedia.org/wiki/List_of_country_calling_codes) | Constant or expression mapping whose output is a valid country code (without the + sign). |
- | WorkphoneLandlineNumber | Full phone number including the area code | Map to *telephoneNumber* attribute. Use regex to remove whitespace, brackets and country code. See example below. |
+ | WorkphoneLandlineCountryCodeName | [Three-letter ISO 3166-1 country code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) | Constant or expression mapping whose output is a three letter country code. |
+ | WorkphoneLandlineCountryCodeNumber | [International country calling code](https://en.wikipedia.org/wiki/List_of_country_calling_codes) | Constant or expression mapping whose output is a valid country code (without the + sign). |
+ | WorkphoneLandlineNumber | Full phone number including the area code | Map to *telephoneNumber* attribute. Use regex to remove whitespace, brackets and country code. |
 | WorkphoneLandlineExtension | Extension number | If *telephoneNumber* contains an extension, use regex to extract the value. |
 | WorkphoneMobileIsPrimary | true/false | Constant or expression mapping whose output is "true" or "false" string value. |
 | WorkphoneMobileCountryCodeName | [Three-letter ISO 3166-1 country code](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3) | Constant or expression mapping whose output is a three letter country code. |
 | WorkphoneMobileCountryCodeNumber | [International country calling code](https://en.wikipedia.org/wiki/List_of_country_calling_codes) | Constant or expression mapping whose output is a valid country code (without the + sign). |
- | WorkphoneMobileNumber | Full phone number including the area code | Map to *mobile* attribute. Use regex to remove whitespace, brackets and country code. See example below. |
+ | WorkphoneMobileNumber | Full phone number including the area code | Map to *mobile* attribute. Use regex to remove whitespace, brackets and country code. |
> [!NOTE] > When invoking the Change_Work_Contact Workday web service, Azure AD sends the following constant values: <br>
In this section, you will configure how writeback attributes flow from Azure AD
> You will encounter writeback failures if your Workday tenant uses different Type_IDs. To prevent such failures, you can use the Workday **Maintain Reference IDs** task and update the Type_IDs to match the values used by Azure AD. <br> >
- **Reference regex expressions - Example 1**
-
- Use the below regular expression, if phone number in Azure AD is set using the format required for Self Service Password Reset (SSPR). <br>
- Example: if the phone number value is +1 1112223333 -> then the regex expression will output 1112223333
-
- ```C#
- Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})", , "${phoneNumber}", , )
- ```
-
- **Reference regex expressions - Example 2**
-
- Use the below regular expression, if phone number in Azure AD is set using the format (XXX) XXX-XXXX. <br>
- Example: if the phone number value is (111) 222-3333 -> then the regex expression will output 1112223333
-
- ```C#
- Replace([mobile], , "[()\\s-]+", , "", , )
- ```
- 6. To save your mappings, click **Save** at the top of the Attribute-Mapping section.
+## Writeback expression mapping examples
+This section provides examples for configuring the Workday Writeback application for common integration scenarios.
+
+* [Timing the writeback for pre-hires](#timing-the-writeback-for-pre-hires)
+* [Handling phone number with country code and phone number](#handling-phone-number-with-country-code-and-phone-number)
+* [Derive country codes from Azure AD *usageLocation* attribute](#derive-country-codes-from-azure-ad-usagelocation-attribute)
+* [Extracting a 10-digit phone number](#extracting-a-10-digit-phone-number)
+* [Removing spaces, dashes and brackets in a phone number](#removing-spaces-dashes-and-brackets-in-a-phone-number)
+* [Handling landline phone number extensions](#handling-landline-phone-number-extensions)
+
+### Timing the writeback for pre-hires
+
+In a typical Workday integration with Azure AD, the inbound user provisioning app - [Workday to on-premises Active Directory](workday-inbound-tutorial.md) or [Workday to Azure AD](workday-inbound-cloud-only-tutorial.md) - creates a new Azure AD account for pre-hires, generating a unique email and userPrincipalName for the user.
+
+By default, the Workday Writeback app will try to set the work email and userID values on the Workday account immediately after the user is created in Azure AD.
+
+If you want to delay the UserID or Email writeback so that it happens on or after the hire date, follow the steps below.
+
+ 1) There is an attribute in Azure AD called *employeeHireDate* in which you can capture the user's employment start date.
+ 1) If you are using [Workday to on-premises Active Directory](workday-inbound-tutorial.md) provisioning job, configure it to flow the Workday *StatusHireDate* field to an attribute in on-premises Active Directory (e.g. *extensionAttribute8*). Configure AAD Connect to sync the on-premises value to *employeeHireDate* in Azure AD.
+ 1) If you are using [Workday to Azure AD](workday-inbound-cloud-only-tutorial.md) provisioning job, configure it to flow the Workday *StatusHireDate* field directly to *employeeHireDate* attribute in Azure AD.
+ > [!NOTE]
+ > If you are storing the employee start date in any other Azure AD *extensionAttribute*, you can use that attribute instead of *employeeHireDate* in the expression below.
+ 1) In your Workday Writeback application, use the following expression rule to export AAD userPrincipalName to Workday UserID field.
+ ```C#
+ IgnoreFlowIfNullOrEmpty(IIF(DateDiff("d", Now(), CDate([employeeHireDate])) >= 0, "", [userPrincipalName]))
+ ```
+    The expression above uses the [DateDiff](../app-provisioning/functions-for-customizing-application-data.md#datediff) function to evaluate the difference between *employeeHireDate* and today's date in UTC, obtained using the [Now](../app-provisioning/functions-for-customizing-application-data.md#now) function. If *employeeHireDate* is today or in the future, the expression returns an empty value and the [IgnoreFlowIfNullOrEmpty](../app-provisioning/functions-for-customizing-application-data.md#ignoreflowifnullorempty) function excludes this attribute from writeback. Once the hire date has passed, the userPrincipalName value flows to the Workday UserID field.
+
+> [!IMPORTANT]
+> For the delayed writeback to work as expected, an operation in on-premises AD or Azure AD must trigger a change to the user just a day before or on the hire date, so that the user's profile is updated and considered for writeback.
+
+### Handling phone number with country code and phone number
+For the phone number writeback operation to be successful, it is important to send the right country code name and country code number. The country code name is a three-letter code that complies with [ISO 3166-1 format](https://en.wikipedia.org/wiki/ISO_3166-1_alpha-3), while country code number refers to the country calling code or [international subscriber dialing (ISD) code](https://en.wikipedia.org/wiki/List_of_country_calling_codes) for that country.
+
+This example assumes that the phone number value in Azure AD for *telephoneNumber* or *mobile* has the format `+<isdCode><space><phoneNumber>`. <br>
+Example: If the phone number value is set to `+1 1112223333` or `+1 (111) 222-3333`, then `1` is the ISD Code and the country code name corresponding to it is `USA`.
+
+Use the regular expression mappings below to send the right country code name and country code number to Workday. You can use either *telephoneNumber* or *mobile* as the source attribute. The examples below use *telephoneNumber*. All expressions here use the [Replace](../app-provisioning/functions-for-customizing-application-data.md#replace) function.
+
+**Example mapping for *WorkphoneLandlineNumber* or *WorkphoneMobileNumber***
+
+```C#
+Replace(Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>.*)", , "${phoneNumber}", , ), ,"[()\\s-]+", ,"", , )
+```
+
+**Example mapping for *WorkphoneLandlineCountryCodeNumber* or *WorkphoneMobileCountryCodeNumber***
+
+```C#
+Replace([telephoneNumber], , "\\+(?<isdCode>\\d*) (?<phoneNumber>.*)", , "${isdCode}", , )
+```
+
+**Example mapping for *WorkphoneLandlineCountryCodeName* or *WorkphoneMobileCountryCodeName***
+
+The expression below extracts the isdCode and uses the [Switch](../app-provisioning/functions-for-customizing-application-data.md#switch) function to look up the right country code name to send to Workday.
+
+```C#
+Switch(Replace([telephoneNumber], , "\\+(?<isdCode>\\d*) (?<phoneNumber>.*)", , "${isdCode}", , ), "USA",
+"93", "AFG", "355", "ALB", "213", "DZA", "376", "AND", "244", "AGO", "54", "ARG", "374", "ARM", "297", "ABW", "61", "AUS", "43", "AUT", "994", "AZE", "973", "BHR", "880", "BGD",
+"375", "BLR", "32", "BEL", "501", "BLZ", "229", "BEN", "975", "BTN", "591", "BOL", "599", "BES", "387", "BIH", "267", "BWA", "55", "BRA", "246", "IOT", "673", "BRN", "359", "BGR",
+"226", "BFA", "257", "BDI", "238", "CPV", "855", "KHM", "237", "CMR", "236", "CAF", "235", "TCD", "56", "CHL", "86", "CHN", "57", "COL", "269", "COM", "242", "COG", "243", "COD",
+"682", "COK", "506", "CRI", "225", "CIV", "385", "HRV", "53", "CUB", "357", "CYP", "420", "CZE", "45", "DNK", "253", "DJI", "593", "ECU", "20", "EGY", "503", "SLV", "240", "GNQ",
+"291", "ERI", "372", "EST", "268", "SWZ", "251", "ETH", "500", "FLK", "298", "FRO", "679", "FJI", "358", "FIN", "33", "FRA", "594", "GUF", "689", "PYF", "241", "GAB", "220", "GMB",
+"995", "GEO", "49", "DEU", "233", "GHA", "350", "GIB", "30", "GRC", "299", "GRL", "590", "GLP", "502", "GTM", "224", "GIN", "245", "GNB", "592", "GUY", "509", "HTI", "504", "HND",
+"852", "HKG", "36", "HUN", "354", "ISL", "91", "IND", "62", "IDN", "98", "IRN", "964", "IRQ", "353", "IRL", "972", "ISR", "39", "ITA", "81", "JPN", "962", "JOR", "254", "KEN", "686",
+"KIR", "850", "PRK", "82", "KOR", "383", "XKX", "965", "KWT", "996", "KGZ", "856", "LAO", "371", "LVA", "961", "LBN", "266", "LSO", "231", "LBR", "218", "LBY", "423", "LIE", "370",
+"LTU", "352", "LUX", "853", "MAC", "261", "MDG", "265", "MWI", "60", "MYS", "960", "MDV", "223", "MLI", "356", "MLT", "692", "MHL", "596", "MTQ", "222", "MRT", "230", "MUS", "262",
+"REU", "52", "MEX", "691", "FSM", "373", "MDA", "377", "MCO", "976", "MNG", "382", "MNE", "212", "MAR", "258", "MOZ", "95", "MMR", "264", "NAM", "674", "NRU", "977", "NPL", "31",
+"NLD", "687", "NCL", "64", "NZL", "505", "NIC", "227", "NER", "234", "NGA", "683", "NIU", "672", "NFK", "389", "MKD", "47", "NOR", "968", "OMN", "92", "PAK", "680", "PLW", "970",
+"PSE", "507", "PAN", "675", "PNG", "595", "PRY", "51", "PER", "63", "PHL", "870", "PCN", "48", "POL", "351", "PRT", "974", "QAT", "40", "ROU", "7", "RUS", "250", "RWA", "290", "SHN",
+"508", "SPM", "685", "WSM", "378", "SMR", "239", "STP", "966", "SAU", "221", "SEN", "381", "SRB", "248", "SYC", "232", "SLE", "65", "SGP", "421", "SVK", "386", "SVN", "677", "SLB",
+"252", "SOM", "27", "ZAF", "211", "SSD", "34", "ESP", "94", "LKA", "249", "SDN", "597", "SUR", "46", "SWE", "41", "CHE", "963", "SYR", "886", "TWN", "992", "TJK", "255", "TZA", "66",
+"THA", "670", "TLS", "228", "TGO", "690", "TKL", "676", "TON", "216", "TUN", "90", "TUR", "993", "TKM", "688", "TUV", "256", "UGA", "380", "UKR", "971", "ARE", "44", "GBR", "1",
+"USA", "598", "URY", "998", "UZB", "678", "VUT", "58", "VEN", "84", "VNM", "681", "WLF", "967", "YEM", "260", "ZMB", "263", "ZWE"
+)
+```
+
+### Derive country codes from Azure AD *usageLocation* attribute
+If you want to set the country code name and country code number in Workday based on the *usageLocation* attribute, use the expression mappings below to convert the two-letter country code to the appropriate three-letter country code name and country code number.
+
+**Example mapping for *WorkphoneLandlineCountryCodeNumber* or *WorkphoneMobileCountryCodeNumber***
+
+```C#
+Switch([usageLocation], "1", "AF", "93", "AX", "358", "AL", "355", "DZ", "213", "AS", "1", "AD", "376", "AO", "244", "AI", "1", "AG", "1", "AR", "54", "AM", "374", "AW", "297", "AU",
+"61", "AT", "43", "AZ", "994", "BS", "1", "BH", "973", "BD", "880", "BB", "1", "BY", "375", "BE", "32", "BZ", "501", "BJ", "229", "BM", "1", "BT", "975", "BO", "591", "BQ", "599",
+"BA", "387", "BW", "267", "BR", "55", "IO", "246", "VG", "1", "BN", "673", "BG", "359", "BF", "226", "BI", "257", "CV", "238", "KH", "855", "CM", "237", "CA", "1", "KY", "1", "CF",
+"236", "TD", "235", "CL", "56", "CN", "86", "CX", "61", "CC", "61", "CO", "57", "KM", "269", "CG", "242", "CD", "243", "CK", "682", "CR", "506", "CI", "225", "HR", "385", "CU", "53",
+"CW", "599", "CY", "357", "CZ", "420", "DK", "45", "DJ", "253", "DM", "1", "DO", "1", "EC", "593", "EG", "20", "SV", "503", "GQ", "240", "ER", "291", "EE", "372", "SZ", "268", "ET",
+"251", "FK", "500", "FO", "298", "FJ", "679", "FI", "358", "FR", "33", "GF", "594", "PF", "689", "GA", "241", "GM", "220", "GE", "995", "DE", "49", "GH", "233", "GI", "350", "GR",
+"30", "GL", "299", "GD", "1", "GP", "590", "GU", "1", "GT", "502", "GG", "44", "GN", "224", "GW", "245", "GY", "592", "HT", "509", "VA", "39", "HN", "504", "HK", "852", "HU", "36",
+"IS", "354", "IN", "91", "ID", "62", "IR", "98", "IQ", "964", "IE", "353", "IM", "44", "IL", "972", "IT", "39", "JM", "1", "JP", "81", "JE", "44", "JO", "962", "KZ", "7", "KE",
+"254", "KI", "686", "KP", "850", "KR", "82", "XK", "383", "KW", "965", "KG", "996", "LA", "856", "LV", "371", "LB", "961", "LS", "266", "LR", "231", "LY", "218", "LI", "423", "LT",
+"370", "LU", "352", "MO", "853", "MG", "261", "MW", "265", "MY", "60", "MV", "960", "ML", "223", "MT", "356", "MH", "692", "MQ", "596", "MR", "222", "MU", "230", "YT", "262", "MX",
+"52", "FM", "691", "MD", "373", "MC", "377", "MN", "976", "ME", "382", "MS", "1", "MA", "212", "MZ", "258", "MM", "95", "NA", "264", "NR", "674", "NP", "977", "NL", "31", "NC",
+"687", "NZ", "64", "NI", "505", "NE", "227", "NG", "234", "NU", "683", "NF", "672", "MK", "389", "MP", "1", "NO", "47", "OM", "968", "PK", "92", "PW", "680", "PS", "970", "PA",
+"507", "PG", "675", "PY", "595", "PE", "51", "PH", "63", "PN", "870", "PL", "48", "PT", "351", "PR", "1", "QA", "974", "RE", "262", "RO", "40", "RU", "7", "RW", "250", "BL", "590",
+"SH", "290", "KN", "1", "LC", "1", "MF", "590", "PM", "508", "VC", "1", "WS", "685", "SM", "378", "ST", "239", "SA", "966", "SN", "221", "RS", "381", "SC", "248", "SL", "232", "SG",
+"65", "SX", "1", "SK", "421", "SI", "386", "SB", "677", "SO", "252", "ZA", "27", "SS", "211", "ES", "34", "LK", "94", "SD", "249", "SR", "597", "SJ", "47", "SE", "46", "CH", "41",
+"SY", "963", "TW", "886", "TJ", "992", "TZ", "255", "TH", "66", "TL", "670", "TG", "228", "TK", "690", "TO", "676", "TT", "1", "TN", "216", "TR", "90", "TM", "993", "TC", "1", "TV",
+"688", "VI", "1", "UG", "256", "UA", "380", "AE", "971", "GB", "44", "UM", "246", "US", "1", "UY", "598", "UZ", "998", "VU", "678", "VE", "58", "VN", "84", "WF", "681", "EH", "212",
+"YE", "967", "ZM", "260", "ZW", "263")
+```
+
+**Example mapping for *WorkphoneLandlineCountryCodeName* or *WorkphoneMobileCountryCodeName***
+
+```C#
+Switch([usageLocation], "USA", "AF", "AFG", "AX", "ALA", "AL", "ALB", "DZ", "DZA", "AS", "ASM", "AD", "AND", "AO", "AGO", "AI", "AIA", "AG", "ATG", "AR", "ARG", "AM", "ARM", "AW",
+"ABW", "AU", "AUS", "AT", "AUT", "AZ", "AZE", "BS", "BHS", "BH", "BHR", "BD", "BGD", "BB", "BRB", "BY", "BLR", "BE", "BEL", "BZ", "BLZ", "BJ", "BEN", "BM", "BMU", "BT", "BTN", "BO",
+"BOL", "BQ", "BES", "BA", "BIH", "BW", "BWA", "BR", "BRA", "IO", "IOT", "VG", "VGB", "BN", "BRN", "BG", "BGR", "BF", "BFA", "BI", "BDI", "CV", "CPV", "KH", "KHM", "CM", "CMR", "CA",
+"CAN", "KY", "CYM", "CF", "CAF", "TD", "TCD", "CL", "CHL", "CN", "CHN", "CX", "CXR", "CC", "CCK", "CO", "COL", "KM", "COM", "CG", "COG", "CD", "COD", "CK", "COK", "CR", "CRI", "CI",
+"CIV", "HR", "HRV", "CU", "CUB", "CW", "CUW", "CY", "CYP", "CZ", "CZE", "DK", "DNK", "DJ", "DJI", "DM", "DMA", "DO", "DOM", "EC", "ECU", "EG", "EGY", "SV", "SLV", "GQ", "GNQ", "ER",
+"ERI", "EE", "EST", "SZ", "SWZ", "ET", "ETH", "FK", "FLK", "FO", "FRO", "FJ", "FJI", "FI", "FIN", "FR", "FRA", "GF", "GUF", "PF", "PYF", "GA", "GAB", "GM", "GMB", "GE", "GEO", "DE",
+"DEU", "GH", "GHA", "GI", "GIB", "GR", "GRC", "GL", "GRL", "GD", "GRD", "GP", "GLP", "GU", "GUM", "GT", "GTM", "GG", "GGY", "GN", "GIN", "GW", "GNB", "GY", "GUY", "HT", "HTI", "VA",
+"VAT", "HN", "HND", "HK", "HKG", "HU", "HUN", "IS", "ISL", "IN", "IND", "ID", "IDN", "IR", "IRN", "IQ", "IRQ", "IE", "IRL", "IM", "IMN", "IL", "ISR", "IT", "ITA", "JM", "JAM", "JP",
+"JPN", "JE", "JEY", "JO", "JOR", "KZ", "KAZ", "KE", "KEN", "KI", "KIR", "KP", "PRK", "KR", "KOR", "XK", "XKX", "KW", "KWT", "KG", "KGZ", "LA", "LAO", "LV", "LVA", "LB", "LBN", "LS",
+"LSO", "LR", "LBR", "LY", "LBY", "LI", "LIE", "LT", "LTU", "LU", "LUX", "MO", "MAC", "MG", "MDG", "MW", "MWI", "MY", "MYS", "MV", "MDV", "ML", "MLI", "MT", "MLT", "MH", "MHL", "MQ",
+"MTQ", "MR", "MRT", "MU", "MUS", "YT", "MYT", "MX", "MEX", "FM", "FSM", "MD", "MDA", "MC", "MCO", "MN", "MNG", "ME", "MNE", "MS", "MSR", "MA", "MAR", "MZ", "MOZ", "MM", "MMR", "NA",
+"NAM", "NR", "NRU", "NP", "NPL", "NL", "NLD", "NC", "NCL", "NZ", "NZL", "NI", "NIC", "NE", "NER", "NG", "NGA", "NU", "NIU", "NF", "NFK", "MK", "MKD", "MP", "MNP", "NO", "NOR", "OM",
+"OMN", "PK", "PAK", "PW", "PLW", "PS", "PSE", "PA", "PAN", "PG", "PNG", "PY", "PRY", "PE", "PER", "PH", "PHL", "PN", "PCN", "PL", "POL", "PT", "PRT", "PR", "PRI", "QA", "QAT", "RE",
+"REU", "RO", "ROU", "RU", "RUS", "RW", "RWA", "BL", "BLM", "SH", "SHN", "KN", "KNA", "LC", "LCA", "MF", "MAF", "PM", "SPM", "VC", "VCT", "WS", "WSM", "SM", "SMR", "ST", "STP", "SA",
+"SAU", "SN", "SEN", "RS", "SRB", "SC", "SYC", "SL", "SLE", "SG", "SGP", "SX", "SXM", "SK", "SVK", "SI", "SVN", "SB", "SLB", "SO", "SOM", "ZA", "ZAF", "SS", "SSD", "ES", "ESP", "LK",
+"LKA", "SD", "SDN", "SR", "SUR", "SJ", "SJM", "SE", "SWE", "CH", "CHE", "SY", "SYR", "TW", "TWN", "TJ", "TJK", "TZ", "TZA", "TH", "THA", "TL", "TLS", "TG", "TGO", "TK", "TKL", "TO",
+"TON", "TT", "TTO", "TN", "TUN", "TR", "TUR", "TM", "TKM", "TC", "TCA", "TV", "TUV", "VI", "VIR", "UG", "UGA", "UA", "UKR", "AE", "ARE", "GB", "GBR", "UM", "UMI", "US", "USA", "UY",
+"URY", "UZ", "UZB", "VU", "VUT", "VE", "VEN", "VN", "VNM", "WF", "WLF", "EH", "ESH", "YE", "YEM", "ZM", "ZMB", "ZW", "ZWE")
+```
+
+### Extracting a 10-digit phone number
+
+Use the regular expression below if the phone number in Azure AD is set using the format required for self-service password reset (SSPR). <br>
+Example: if the phone number value is `+1 1112223333`, the expression outputs `1112223333`.
+
+```C#
+Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>\\d{10})", , "${phoneNumber}", , )
+```
+### Removing spaces, dashes and brackets in a phone number
+
+Use the regular expression below if the phone number in Azure AD is set using the format (XXX) XXX-XXXX. <br>
+Example: if the phone number value is `(111) 222-3333`, the expression outputs `1112223333`.
+
+```C#
+Replace([mobile], , "[()\\s-]+", , "", , )
+```
+
+### Handling landline phone number extensions
+
+Let's say that all phone numbers in Azure AD have extension numbers and you want to populate the extension numbers in Workday.
+This example assumes that phone numbers are stored in the format: `+<isdCode><space><phoneNumber><space>x<extensionNumber>` and the extension number appears after the `x` character. <br>
+
+To extract the components of this phone number, use the expressions below:
+
+**Example mapping for *WorkphoneLandlineNumber***
+
+If *telephoneNumber* has the value `+1 (206) 291-8163 x8125`, the expression below will return `2062918163`.
+```C#
+Replace(Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>.* )[x](?<extension>.*)", , "${phoneNumber}", , ), ,"[()\\s-]+", ,"", , )
+```
+
+**Example mapping for *WorkphoneLandlineCountryCodeNumber***
+
+If *telephoneNumber* has the value `+1 (206) 291-8163 x8125`, the expression below will return `1`.
+```C#
+Replace(Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>.* )[x](?<extension>.*)", , "${isdCode}", , ), ,"[()\\s-]+", ,"", , )
+```
+
+**Example mapping for *WorkphoneLandlineExtension***
+
+If *telephoneNumber* has the value `+1 (206) 291-8163 x8125`, the expression below will return `8125`.
+```C#
+Replace(Replace([telephoneNumber], , "\\+(?<isdCode>\\d* )(?<phoneNumber>.* )[x](?<extension>.*)", , "${extension}", , ), ,"[()\\s-]+", ,"", , )
+```
+ ## Enable and launch user provisioning Once the Workday provisioning app configurations have been completed, you can turn on the provisioning service in the Azure portal. > [!TIP]
-> By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are errors in the mapping or Workday data issues, then the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring **Source Object Scope** filter and testing your attribute mappings with a few test users before launching the full sync for all users. Once you have verified that the mappings work and are giving you the desired results, then you can either remove the filter or gradually expand it to include more users.
+> By default when you turn on the provisioning service, it will initiate provisioning operations for all users in scope. If there are errors in the mapping or Workday data issues, the provisioning job might fail and go into the quarantine state. To avoid this, as a best practice, we recommend configuring a **Source Object Scope** filter and testing your attribute mappings with a few test users using the [provision on demand](../app-provisioning/provision-on-demand.md) feature before launching the full sync for all users. Once you've verified that the mappings work and give you the desired results, you can either remove the filter or gradually expand it to include more users.
1. In the **Provisioning** tab, set the **Provisioning Status** to **On**.
Once the Workday provisioning app configurations have been completed, you can tu
> [!div class="mx-imgBorder"] > ![Provisioning progress bar](./media/sap-successfactors-inbound-provisioning/prov-progress-bar-stats.png) ## Known issues and limitations * The Writeback app uses a pre-defined value for the parameters **Communication_Usage_Type_ID** and **Phone_Device_Type_ID**. If your Workday tenant uses a different value for these attributes, the Writeback operation will not succeed. A suggested workaround is to update the Type_IDs in Workday.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Title: What's new for Azure Active Directory Verifiable Credentials (preview) description: Recent updates for Azure Active Directory Verifiable Credentials -+ Previously updated : 04/27/2022 Last updated : 05/10/2022
This article lists the latest features, improvements, and changes in the Azure Active Directory (Azure AD) Verifiable Credentials service.
-## April
+## May 2022
+
+We are expanding our service to all Azure AD customers! Verifiable credentials are now available to everyone with an Azure AD subscription (Free and Premium). Existing tenants that configured the Verifiable Credentials service prior to May 4, 2022 must make a [small change](verifiable-credentials-faq.md#updating-the-vc-service-configuration) to avoid service disruptions.
+
+## April 2022
Starting next month, we are rolling out exciting changes to the subscription requirements for the Verifiable Credentials service. Administrators must perform a small configuration change before **May 4, 2022** to avoid service disruptions. Follow [these steps](verifiable-credentials-faq.md?#updating-the-vc-service-configuration) to apply the required configuration changes. >[!IMPORTANT] > If changes are not applied before **May 4, 2022**, you will experience errors on issuance and presentation for your application or service using the Azure AD Verifiable Credentials Service. [Update service configuration instructions](verifiable-credentials-faq.md?#updating-the-vc-service-configuration). ## March 2022 - Azure AD Verifiable Credentials customers can now change the [domain linked](how-to-dnsbind.md) to their DID easily from the Azure portal. - We made updates to Microsoft Authenticator that change the interaction between the Issuer of a verifiable credential and the user presenting the verifiable credential. This update forces all Verifiable Credentials to be reissued in Microsoft Authenticator for iOS. [More information](whats-new.md?#microsoft-authenticator-did-generation-update)
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Certificate Rotation in Azure Kubernetes Service (AKS)
description: Learn certificate rotation in an Azure Kubernetes Service (AKS) cluster. Previously updated : 3/29/2022 Last updated : 5/10/2022 # Certificate rotation in Azure Kubernetes Service (AKS)
Auto certificate rotation won't be enabled on a non-RBAC cluster.
## Manually rotate your cluster certificates > [!WARNING]
-> Rotating your certificates using `az aks rotate-certs` will recreate all of your nodes and their OS Disks and can cause up to 30 minutes of downtime for your AKS cluster.
+> Rotating your certificates using `az aks rotate-certs` will recreate all of your nodes, VM scale sets, and their disks, and can cause up to 30 minutes of downtime for your AKS cluster.
Use [az aks get-credentials][az-aks-get-credentials] to sign in to your AKS cluster. This command also downloads and configures the `kubectl` client certificate on your local machine.
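For reference, a minimal sketch of the manual rotation flow (the resource group and cluster names are placeholders):

```azurecli-interactive
# Download and configure the kubectl client certificate
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Rotate all cluster certificates; expect node recreation and downtime
az aks rotate-certs --resource-group myResourceGroup --name myAKSCluster
```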
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
The maximum number of pods per node in an AKS cluster is 250. The *default* maxi
### Configure maximum - new clusters
-You're able to configure the maximum number of pods per node at cluster deployment time or as you add new node pools. If you deploy with the Azure CLI or with a Resource Manager template, you can set the maximum pods per node value as high as 250.
+You're able to configure the maximum number of pods per node at cluster deployment time or as you add new node pools. You can set the maximum pods per node value as high as 250.
If you don't specify maxPods when creating new node pools, you receive a default value of 30 for Azure CNI.
A minimum value for maximum pods per node is enforced to guarantee space for sys
* **Azure CLI**: Specify the `--max-pods` argument when you deploy a cluster with the [az aks create][az-aks-create] command. The maximum value is 250. * **Resource Manager template**: Specify the `maxPods` property in the [ManagedClusterAgentPoolProfile] object when you deploy a cluster with a Resource Manager template. The maximum value is 250.
-* **Azure portal**: You can't change the maximum number of pods per node when you deploy a cluster with the Azure portal. Azure CNI networking clusters are limited to 110 pods per node when you deploy using the Azure portal.
+* **Azure portal**: Change the `Max pods per node` field in the node pool settings when creating a cluster or adding a new node pool.
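For example, a new Azure CNI cluster with a raised pod limit might be created as follows (names are placeholders):

```azurecli-interactive
# --network-plugin azure selects Azure CNI networking
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --max-pods 250
```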
### Configure maximum - existing clusters
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
The following limitations apply when you create and manage AKS clusters that sup
* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions]. * You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster. * System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* The AKS cluster must use the Standard SKU load balancer to use multiple node pools, the feature is not supported with Basic SKU load balancers.
+* The AKS cluster must use the Standard SKU load balancer to use multiple node pools; the feature isn't supported with Basic SKU load balancers.
* The AKS cluster must use virtual machine scale sets for the nodes. * You can't change the VM size of a node pool after you create it. * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools the length must be between 1 and 12 characters; for Windows node pools the length must be between 1 and 6 characters.
A workload may require splitting a cluster's nodes into separate pools for logic
* All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy. * If you expand your VNet after creating the cluster, you must update your cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR. AKS will error out on the agent pool add, though it was originally allowed. If you don't know how to reconcile your cluster, file a support ticket.
-* Azure Network Policy is not supported.
-* Kube-proxy is designed for a single contiguous CIDR and optimizes rules based on that value. When using multiple non-contiguous ranges, these optimizations cannot occur. See this [K.E.P.](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2450-Remove-knowledge-of-pod-cluster-CIDR-from-iptables-rules) and the documentation for the [`--cluster-cidr` `kube-proxy` argument](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for more details. In clusters configured with Azure CNI, `kube-proxy` will be configured with the subnet of the first node pool at cluster creation.
+* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
+* Windows nodes will SNAT traffic to the new subnets until the nodepool is reimaged.
+* Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
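For the last point, a minimal Service manifest sketch that pins an internal load balancer to a specific subnet (the subnet name is a placeholder):

```yml
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet" # placeholder subnet name
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
```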
To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.
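For example (the subscription ID, resource group, VNet, and subnet names below are placeholders):

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myVnetRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/myNodePoolSubnet"
```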
The valid Kubernetes upgrades for a cluster's control plane and node pools are v
* Rules for valid versions to upgrade node pools: * The node pool version must have the same *major* version as the control plane. * The node pool *minor* version must be within two *minor* versions of the control plane version.
- * The node pool version cannot be greater than the control `major.minor.patch` version.
+ * The node pool version can't be greater than the control `major.minor.patch` version.
* Rules for submitting an upgrade operation:
- * You cannot downgrade the control plane or a node pool Kubernetes version.
- * If a node pool Kubernetes version is not specified, behavior depends on the client being used. Declaration in Resource Manager templates falls back to the existing version defined for the node pool if used, if none is set the control plane version is used to fall back on.
- * You can either upgrade or scale a control plane or a node pool at a given time, you cannot submit multiple operations on a single control plane or node pool resource simultaneously.
+ * You can't downgrade the control plane or a node pool Kubernetes version.
+ * If a node pool Kubernetes version isn't specified, behavior depends on the client being used. Declaration in Resource Manager templates falls back to the existing version defined for the node pool, if used; if none is set, the control plane version is used.
+ * You can either upgrade or scale a control plane or a node pool at a given time; you can't submit multiple operations on a single control plane or node pool resource simultaneously.
## Scale a node pool manually
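A minimal scaling sketch (the pool, cluster, and group names are placeholders):

```azurecli-interactive
az aks nodepool scale \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 5
```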
Associating a node pool with an existing capacity reservation group can be done
```azurecli-interactive az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG ```
-Associating a system node pool with an existing capacity reservation group can be done using [az aks create][az-aks-create] command. If the capacity reservation group specified does not exist, then a warning is issued and the cluster gets created without any capacity reservation group association.
+Associating a system node pool with an existing capacity reservation group can be done using the [az aks create][az-aks-create] command. If the specified capacity reservation group doesn't exist, a warning is issued and the cluster gets created without any capacity reservation group association.
```azurecli-interactive az aks create -g MyRG -n MyMC --capacityReservationGroup myCRG
FIPS-enabled node pools have the following limitations:
* Currently, you can only have FIPS-enabled Linux-based node pools running on Ubuntu 18.04. * FIPS-enabled node pools require Kubernetes version 1.19 and greater. * To update the underlying packages or modules used for FIPS, you must use [Node Image Upgrade][node-image-upgrade].
-* Container Images on the FIPS nodes have not been assessed for FIPS compliance.
+* Container Images on the FIPS nodes haven't been assessed for FIPS compliance.
> [!IMPORTANT] > The FIPS-enabled Linux image is a different image than the default Linux image used for Linux-based node pools. To enable FIPS on a node pool, you must create a new Linux-based node pool. You can't enable FIPS on existing node pools.
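A sketch of creating such a pool (names are placeholders; the `--enable-fips-image` flag requires a recent Azure CLI version):

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name fipsnp \
    --enable-fips-image
```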
To verify your node pool is FIPS-enabled, use [az aks show][az-aks-show] to chec
az aks show --resource-group myResourceGroup --cluster-name myAKSCluster --query="agentPoolProfiles[].{Name:name enableFips:enableFips}" -o table ```
-The following example output shows the *fipsnp* node pool is FIPS-enabled and *nodepool1* is not.
+The following example output shows the *fipsnp* node pool is FIPS-enabled and *nodepool1* isn't.
```output Name enableFips
Edit these values to update, add, or delete node pools as needed:
} ```
-Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You are prompted for the existing AKS cluster name and location:
+Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
```azurecli-interactive az deployment group create \
It may take a few minutes to update your AKS cluster depending on the node pool
## Assign a public IP per node for your node pools
-AKS nodes do not require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
+AKS nodes don't require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
First, create a new resource group.
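A compressed sketch of the flow (group, cluster, and region names are placeholders):

```azurecli-interactive
az group create --name myResourceGroup2 --location eastus

# --enable-node-public-ip gives every node in the pool its own public IP
az aks create \
    --resource-group myResourceGroup2 \
    --name myAKSCluster \
    --enable-node-public-ip
```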
Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
[use-tags]: use-tags.md [use-labels]: use-labels.md [cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
+[internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
Pods can experience disruption due to [various](https://kubernetes.io/docs/conce
Consider using [Pod Disruption Budgets](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) to enforce a minimum number of pods to be available at any given time.
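As an illustrative sketch (the selector label and `minAvailable` value are assumptions; match them to your own deployment):

```yml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: apim-gateway-pdb
spec:
  minAvailable: 1 # keep at least one gateway pod available during voluntary disruptions
  selector:
    matchLabels:
      app: azure-api-management-gateway # assumed label; use your deployment's labels
```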
+## Security
+The self-hosted gateway is able to run as non-root in Kubernetes, allowing customers to run the gateway securely.
+
+Here is an example of the security context for the self-hosted gateway:
+```yml
+securityContext:
+ allowPrivilegeEscalation: false
+ runAsNonRoot: true
+  runAsUser: 1001 # This is a built-in user, but you can use any other user ID, for example 1000
+ runAsGroup: 2000 # This is just an example
+ privileged: false
+ capabilities:
+ drop:
+ - all
+```
+
+> [!WARNING]
+> Running the self-hosted gateway with read-only filesystem (`readOnlyRootFilesystem: true`) is not supported.
+
+> [!WARNING]
+> When using local CA certificates, the self-hosted gateway must run with user ID (UID) `1001` in order to manage the CA certificates; otherwise, the gateway will not start up.
+ ## Next steps * To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
Open the Azure AD B2C blade in the portal and do the following steps.
1. Two extra areas in the function app need to be configured (Authorization and Network Restrictions). 1. First, let's configure Authentication / Authorization, so navigate back to the root blade of the function app via the breadcrumb.
-1. Next select 'Authentication / Authorization' (under 'Settings').
-1. Turn on the App Service Authentication feature.
-1. Set the Action to take when request is not authenticated dropdown to "Log in with Azure Active Directory".
-1. Under 'Authentication Providers', choose 'Azure Active Directory'.
-1. Choose 'Advanced' from the Management Mode switch.
-1. Paste the Backend application's [Application] Client ID (from Azure AD B2C) into the 'Client ID' box
+1. Next select 'Authentication' (under 'Settings').
+1. Click 'Add Identity Provider'.
+1. From the Identity Provider dropdown, select 'Microsoft'.
+1. For App Registration, select 'Provide the details of an existing app registration'.
+1. Paste the Backend application's client ID (from Azure AD B2C) into the 'Application (client) ID' box (we recorded this configuration earlier).
1. Paste the Well-known open-id configuration endpoint from the sign-up and sign-in policy into the Issuer URL box (we recorded this configuration earlier; the typical endpoint format is sketched after these steps).
-1. Click 'Show Secret' and paste the Backend application's client secret into the appropriate box.
-1. Select OK, which takes you back to the identity provider selection blade/screen.
-1. Leave [Token Store](../app-service/overview-authentication-authorization.md#token-store) enabled under advanced settings (default).
+1. Paste the Backend application's client secret into the appropriate box (we recorded this configuration earlier).
+1. For 'Unauthenticated requests', select 'HTTP 401 Unauthorized: recommended for APIs'.
+1. Leave [Token Store](../app-service/overview-authentication-authorization.md#token-store) enabled (default).
1. Click 'Save' (at the top left of the blade). > [!IMPORTANT] > Now your Function API is deployed and should throw 401 responses if the correct JWT is not supplied as an Authorization: Bearer header, and should return data when a valid request is presented.
- > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests. Be aware that this will change the unauthorized request behavior between the Backend Function App and Frontend SPA as EasyAuth will issue a 302 redirect to Azure Active Directory instead of a 401 Not Authorized response, we will correct this by using API Management later.
+ > You added additional defense-in-depth security in EasyAuth by configuring the 'Login With Azure AD' option to handle unauthenticated requests.
> > We still have no IP security applied; if you have a valid key and OAuth2 token, anyone can call this from anywhere - ideally we want to force all requests to come via API Management. > > If you're using APIM Consumption tier then [there isn't a dedicated Azure API Management Virtual IP](./api-management-howto-ip-addresses.md#ip-addresses-of-consumption-tier-api-management-service) to allow-list with the functions access-restrictions. In the Azure API Management Standard SKU and above [the VIP is single tenant and for the lifetime of the resource](./api-management-howto-ip-addresses.md#changes-to-the-ip-addresses). For the Azure API Management Consumption tier, you can lock down your API calls via the shared secret function key in the portion of the URI you copied above. Also, for the Consumption tier - steps 12-17 below do not apply.
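For reference, the well-known OpenID configuration endpoint for a B2C sign-up and sign-in policy typically has this form (the tenant and policy names are placeholders):

```
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/v2.0/.well-known/openid-configuration
```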
-1. Close the 'Authentication / Authorization' blade
+1. Close the 'Authentication' blade from the App Service / Functions portal.
1. Open the *API Management blade of the portal*, then open *your instance*. 1. Record the Private VIP shown on the overview tab. 1. Return to the *Azure Functions blade of the portal* then open *your instance* again.
The steps above can be adapted and edited to allow many different uses of Azure
* Check out more [videos](https://azure.microsoft.com/documentation/videos/index/?services=api-management) about API Management. * For other ways to secure your back-end service, see [Mutual Certificate authentication](api-management-howto-mutual-certificates.md). * [Create an API Management service instance](get-started-create-service-instance.md).
-* [Manage your first API](import-and-publish.md).
+* [Manage your first API](import-and-publish.md).
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
Deploying self-hosted gateways into the same environments where the backend API
The self-hosted gateway is a containerized, functionally equivalent version of the managed gateway deployed to Azure as part of every API Management service. The self-hosted gateway is available as a Linux-based Docker [container image](https://aka.ms/apim/sputnik/dhub) from the Microsoft Container Registry. It can be deployed to Docker, Kubernetes, or any other container orchestration solution running on a server cluster on premises, cloud infrastructure, or for evaluation and development purposes, on a personal computer. You can also deploy the self-hosted gateway as a cluster extension to an [Azure Arc-enabled Kubernetes cluster](./how-to-deploy-self-hosted-gateway-azure-arc.md).
+### Known limitations
+ The following functionality found in the managed gateways is **not available** in the self-hosted gateways: - Sending resource logs (diagnostic logs) to Azure Monitor. However, you can [send metrics](how-to-configure-cloud-metrics-logs.md) to Azure Monitor, or [configure and persist logs locally](how-to-configure-local-metrics-logs.md) where the self-hosted gateway is deployed.
app-service Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md
To create Azure resources in VS Code, you must have the [Azure Tools extension p
> [!div class="nextstepaction"] > [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
+In the application folder, open VS Code:
+```console
+code .
+```
+ | Instructions | Screenshot | |:-|--:|
-| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240px.png" alt-text="A Screenshot of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
-| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240px.png" alt-text="A screenshot of the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
-| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-3-240px.png" alt-text="A screenshot of dialog box used to enter the name of the new web app in Visual Studio Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-3.png"::: |
-| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: |
-| [!INCLUDE [Create app service step 6](<./includes/quickstart-python/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240px.png" alt-text="A screenshot of the dialog in VS Code used to select the App Service plan for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
+| [!INCLUDE [Create app service step 1](<./includes/quickstart-python/create-app-service-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-1-240-px.png" alt-text="A Screenshot of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-1.png"::: |
+| [!INCLUDE [Create app service step 2](<./includes/quickstart-python/create-app-service-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-2-240-px.png" alt-text="A screenshot of the App Service section of Azure Tools extension and the context menu used to create a new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-2.png"::: |
+| [!INCLUDE [Create app service step 3](<./includes/quickstart-python/create-app-service-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-3-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the folder to deploy for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-3.png"::: |
+| [!INCLUDE [Create app service step 4](<./includes/quickstart-python/create-app-service-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-4-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select Create a new Web App." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-4.png"::: |
+| [!INCLUDE [Create app service step 5](<./includes/quickstart-python/create-app-service-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-5-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to enter the globally unique name for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-5.png"::: |
+| [!INCLUDE [Create app service step 6](<./includes/quickstart-python/create-app-service-visual-studio-code-6.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-6-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select the runtime stack for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-6.png"::: |
+| [!INCLUDE [Create app service step 7](<./includes/quickstart-python/create-app-service-visual-studio-code-7.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-7-240-px.png" alt-text="A screenshot of the dialog box in VS Code used to select a pricing tier for the new web app." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-7.png"::: |
+| [!INCLUDE [Create app service step 8](<./includes/quickstart-python/create-app-service-visual-studio-code-8.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-8-240-px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-8.png"::: |
+| [!INCLUDE [Create app service step 9](<./includes/quickstart-python/create-app-service-visual-studio-code-9.md>)] | :::image type="content" source="./media/quickstart-python/create-app-service-visual-studio-code-9-240-px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/create-app-service-visual-studio-code-9.png"::: |
### [Azure CLI](#tab/azure-cli)
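On the Azure CLI path, the create-and-deploy step can be sketched as a single command run from the application folder; the runtime and SKU shown here are assumptions for illustration, not requirements:

```azurecli
# Create the App Service app and supporting resources, then deploy the code
# in the current folder (runtime and SKU values are illustrative)
az webapp up --runtime PYTHON:3.9 --sku B1 --logs
```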
Azure App service supports multiple methods to deploy your application code to A
### [Deploy using VS Code](#tab/vscode-deploy)
-To deploy a web app from VS Code, you must have the [Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack) installed and be signed into Azure from VS Code.
-
-> [!div class="nextstepaction"]
-> [Download Azure Tools extension pack](https://marketplace.visualstudio.com/items?itemName=ms-vscode.vscode-node-azure-pack)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [VS Code deploy step 1](<./includes/quickstart-python/deploy-visual-studio-code-1.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-1-240px.png" alt-text="A screenshot of the location of the Azure Tools icon in the left toolbar of VS Code." lightbox="./media/quickstart-python/deploy-visual-studio-code-1.png"::: |
-| [!INCLUDE [VS Code deploy step 2](<./includes/quickstart-python/deploy-visual-studio-code-2.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-2-240px.png" alt-text="A screenshot of the context menu of an App Service and the deploy to web app menu option." lightbox="./media/quickstart-python/deploy-visual-studio-code-2.png"::: |
-| [!INCLUDE [VS Code deploy step 3](<./includes/quickstart-python/deploy-visual-studio-code-3.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-3-240px.png" alt-text="A screenshot dialog in VS Code used to choose the app to deploy." lightbox="./media/quickstart-python/deploy-visual-studio-code-3.png"::: |
-| [!INCLUDE [VS Code deploy step 4](<./includes/quickstart-python/deploy-visual-studio-code-4.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-4-240px.png" alt-text="A screenshot of a dialog box in VS Code asking if you want to update your workspace to run build commands." lightbox="./media/quickstart-python/deploy-visual-studio-code-4.png"::: |
-| [!INCLUDE [VS Code deploy step 5](<./includes/quickstart-python/deploy-visual-studio-code-5.md>)] | :::image type="content" source="./media/quickstart-python/deploy-visual-studio-code-5-240px.png" alt-text="A screenshot showing the confirmation dialog when the app code has been deployed to Azure." lightbox="./media/quickstart-python/deploy-visual-studio-code-5.png"::: |
### [Deploy using Azure CLI](#tab/azure-cli-deploy)
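As a hedged sketch of the CLI deployment path, assuming the web app already exists and the code has been packaged as `app.zip` (both names are hypothetical):

```azurecli
# Zip-deploy packaged application code to an existing web app
az webapp deploy \
    --resource-group my-rg \
    --name my-python-app \
    --src-path app.zip \
    --type zip
```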
application-gateway How To Troubleshoot Application Gateway Session Affinity Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-to-troubleshoot-application-gateway-session-affinity-issues.md
Learn how to diagnose and resolve session affinity issues with Azure Application
## Overview
-The cookie-based session affinity feature is useful when you want to keep a user session on the same server. By using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the same server for processing. This is important in cases where session state is saved locally on the server for a user session.
+The cookie-based session affinity feature is useful for keeping a user session on the same server. By using gateway-managed cookies, the Application Gateway can direct subsequent traffic from a user session to the same server for processing. This is important in cases where session state is saved locally on the server for a user session. Session affinity is also known as sticky sessions.
> [!NOTE] > Application Gateway v1 issues a cookie called ARRAffinity, which is used to direct traffic to the same backend pool member. In Application Gateway v2, this cookie has been renamed to ApplicationGatewayAffinity. For the purposes of this document, ApplicationGatewayAffinity will be used as an example, ARRAffinity can be substituted in where applicable for Application Gateway v1 instances.
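Before troubleshooting, it can help to confirm that cookie-based affinity is actually enabled on the backend HTTP settings. A minimal Azure CLI sketch, assuming illustrative gateway and settings names:

```azurecli
# Enable cookie-based affinity on a backend HTTP settings object
# (gateway, group, and settings names are hypothetical)
az network application-gateway http-settings update \
    --gateway-name my-appgw \
    --resource-group my-rg \
    --name appGatewayBackendHttpSettings \
    --cookie-based-affinity Enabled
```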
application-gateway Overview V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/overview-v2.md
The new v2 SKU includes the following enhancements:
- **Key Vault Integration**: Application Gateway v2 supports integration with Key Vault for server certificates that are attached to HTTPS enabled listeners. For more information, see [TLS termination with Key Vault certificates](key-vault-certs.md). - **Mutual Authentication (mTLS)**: Application Gateway v2 supports authentication of client requests. For more information, see [Overview of mutual authentication with Application Gateway](mutual-authentication-overview.md). - **Azure Kubernetes Service Ingress Controller**: The Application Gateway v2 Ingress Controller allows the Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) known as AKS Cluster. For more information, see [What is Application Gateway Ingress Controller?](ingress-controller-overview.md).
+- **Private link**: The v2 SKU offers private connectivity from other virtual networks in other regions and subscriptions through the use of private endpoints.
- **Performance enhancements**: The v2 SKU offers up to 5X better TLS offload performance as compared to the Standard/WAF SKU. - **Faster deployment and update time** The v2 SKU provides faster deployment and update time as compared to Standard/WAF SKU. This also includes WAF configuration changes.
The following table compares the features available with each SKU.
| URL-based routing | &#x2713; | &#x2713; | | Multiple-site hosting | &#x2713; | &#x2713; | | Mutual Authentication (mTLS) | | &#x2713; |
+| Private Link support | | &#x2713; |
| Traffic redirection | &#x2713; | &#x2713; | | Web Application Firewall (WAF) | &#x2713; | &#x2713; | | WAF custom rules | | &#x2713; |
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
+
+ Title: Configure Azure Application Gateway Private Link
+description: This article shows you how to configure Application Gateway Private Link.
++++ Last updated : 05/09/2022++++
+# Configure Azure Application Gateway Private Link
+
+Application Gateway Private Link allows you to connect your workloads over a private connection spanning VNets and subscriptions. For more information, see [Application Gateway Private Link](private-link.md).
+++
+## Configuration options
+
+Application Gateway Private Link can be configured through several tools, including the Azure portal, Azure PowerShell, and Azure CLI.
+
+# [Azure portal](#tab/portal)
+
+**Define a subnet for Private Link Configuration**
+
+To enable Private Link Configuration, a subnet, different from the Application Gateway subnet, is required for the private link IP configuration. Private Link must use a subnet that doesn't contain any Application Gateways. Subnet sizing can be determined by the number of connections required for your deployment. Each IP address allocated to this subnet supports up to 64,000 concurrent TCP connections that can be established via Private Link at a single point in time. Allocate more IP addresses to allow more connections via Private Link. For example: `n * 64K`, where `n` is the number of IP addresses provisioned.
+
+> [!Note]
+> The maximum number of IP addresses per private link configuration is eight. Only dynamic allocation is supported.
+
+The following steps can be completed to create a new subnet:
+
+[Add, change, or delete a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md#add-a-subnet)
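A dedicated subnet can also be created from the command line; a minimal sketch, assuming hypothetical VNet, resource group, and address-range values:

```azurecli
# Create a subnet reserved for the private link IP configuration
# (all names and the address prefix are illustrative)
az network vnet subnet create \
    --name AppGW-PL-Subnet \
    --vnet-name my-vnet \
    --resource-group my-rg \
    --address-prefixes 10.0.2.0/24
```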
+
+**Configure Private Link**
+
+The Private link configuration defines the infrastructure used by Application Gateway to enable connections from Private Endpoints. To create the Private link configuration, complete the following steps:
+
+1. Go to the [Azure portal](https://portal.azure.com)
+1. Search for and select **Application Gateways**.
+1. Select the name of the application gateway that you want to enable Private Link for.
+1. Select **Private link**.
+1. Configure the following items:
+
+ - **Name**: The name of the private link configuration.
+ - **Private link subnet**: The subnet that private link IP addresses will be consumed from.
+ - **Frontend IP Configuration**: The frontend IP address that private link should forward traffic to on Application Gateway.
+ - **Private IP address settings**: Specify at least one IP address.
+1. Select **Add**.
+
+**Configure Private Endpoint**
+
+A private endpoint is a network interface that uses a private IP address from the virtual network containing clients wishing to connect to your gateway. Each of the clients will use the private IP address of the Private Endpoint to tunnel traffic to the Application Gateway. To create a private endpoint, complete the following steps:
+
+1. Select the **Private endpoint connections** tab.
+1. Select **Create**.
+1. On the **Basics** tab, configure a resource group, name, and region for the Private Endpoint. Select **Next**.
+1. On the **Resource** tab, select **Next**.
+1. On the **Virtual Network** tab, configure the virtual network and subnet where the private endpoint network interface should be provisioned. Configure whether the private endpoint should have a dynamic or static IP address. Finally, configure whether you want a new private link zone to be created to automatically manage IP addressing. Select **Next**.
+1. On the **Tags** tab, optionally configure resource tags. Select **Next**.
+1. Select **Create**.
+
+> [!Note]
+> If the public or private IP configuration resource is missing when trying to select a _Target sub-resource_ on the _Resource_ tab of private endpoint creation, ensure a listener is actively using the respective frontend IP configuration. Frontend IP configurations without an associated listener won't be shown as a _Target sub-resource_.
+
+# [Azure PowerShell](#tab/powershell)
+
+To configure Private Link on an existing Application Gateway via Azure PowerShell, use the following commands:
+
+```azurepowershell
+# Disable Private Link Service Network Policies
+# https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
+$net =@{
+ Name = 'AppGW-PL-PSH'
+ ResourceGroupName = 'AppGW-PL-PSH-RG'
+}
+$vnet = Get-AzVirtualNetwork @net
+
+($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'AppGW-PL-Subnet'}).PrivateLinkServiceNetworkPolicies = "Disabled"
+
+$vnet | Set-AzVirtualNetwork
+
+# Get Application Gateway Frontend IP Name
+$agw = Get-AzApplicationGateway -Name AppGW-PL-PSH -ResourceGroupName AppGW-PL-PSH-RG
+# List the names
+$agw.FrontendIPConfigurations | Select Name
+
+# Add a new Private Link configuration and associate it with an existing Frontend IP
+$PrivateLinkIpConfiguration = New-AzApplicationGatewayPrivateLinkIpConfiguration `
+ -Name "ipConfig01" `
+ -Subnet ($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'AppGW-PL-Subnet'}) `
+ -Primary
+
+# Add the Private Link configuration to the gateway configuration
+Add-AzApplicationGatewayPrivateLinkConfiguration `
+ -ApplicationGateway $agw `
+ -Name "privateLinkConfig01" `
+ -IpConfiguration $PrivateLinkIpConfiguration
+
+# Associate private link configuration to Frontend IP
+$agwPip = ($agw | Select -ExpandProperty FrontendIpConfigurations| Where-Object {$_.Name -eq 'appGwPublicFrontendIp'}).PublicIPAddress.Id
+$privateLinkConfiguration = ($agw | Select -ExpandProperty PrivateLinkConfigurations | Where-Object {$_.Name -eq 'privateLinkConfig01'}).Id
+Set-AzApplicationGatewayFrontendIPConfig -ApplicationGateway $agw -Name "appGwPublicFrontendIp" -PublicIPAddressId $agwPip -PrivateLinkConfigurationId $privateLinkConfiguration
+
+# Apply the change to the gateway
+Set-AzApplicationGateway -ApplicationGateway $agw
+
+# Disable Private Endpoint Network Policies
+# https://docs.microsoft.com/azure/private-link/disable-private-endpoint-network-policy
+$net =@{
+ Name = 'AppGW-PL-Endpoint-PSH-VNET'
+ ResourceGroupName = 'AppGW-PL-Endpoint-PSH-RG'
+}
+$vnet_plendpoint = Get-AzVirtualNetwork @net
+
+($vnet_plendpoint | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'MySubnet'}).PrivateEndpointNetworkPolicies = "Disabled"
+
+$vnet_plendpoint | Set-AzVirtualNetwork
+
+# Create Private Link Endpoint - Group ID is the same as the frontend IP configuration
+$privateEndpointConnection = New-AzPrivateLinkServiceConnection -Name "AppGW-PL-Connection" -PrivateLinkServiceId $agw.Id -GroupID "appGwPublicFrontendIp"
+
+## Create private endpoint
+New-AzPrivateEndpoint -Name "AppGWPrivateEndpoint" -ResourceGroupName $vnet_plendpoint.ResourceGroupName -Location $vnet_plendpoint.Location -Subnet ($vnet_plendpoint | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'MySubnet'}) -PrivateLinkServiceConnection $privateEndpointConnection
+```
+A list of all Azure PowerShell references for Private Link Configuration on Application Gateway can be found here:
+- [Get-AzApplicationGatewayPrivateLinkConfiguration](/powershell/module/az.network/get-azapplicationgatewayprivatelinkconfiguration)
+- [New-AzApplicationGatewayPrivateLinkConfiguration](/powershell/module/az.network/new-azapplicationgatewayprivatelinkconfiguration)
+- [New-AzApplicationGatewayPrivateLinkIpConfiguration](/powershell/module/az.network/new-azapplicationgatewayprivatelinkipconfiguration)
+- [Add-AzApplicationGatewayPrivateLinkConfiguration](/powershell/module/az.network/add-azapplicationgatewayprivatelinkconfiguration)
+- [Remove-AzApplicationGatewayPrivateLinkConfiguration](/powershell/module/az.network/remove-azapplicationgatewayprivatelinkconfiguration)
+- [Set-AzApplicationGatewayPrivateLinkConfiguration](/powershell/module/az.network/set-azapplicationgatewayprivatelinkconfiguration)
+
+# [Azure CLI](#tab/cli)
+
+To configure Private Link on an existing Application Gateway via Azure CLI, use the following commands:
+
+```azurecli
+# Disable Private Link Service Network Policies
+# https://docs.microsoft.com/en-us/azure/private-link/disable-private-endpoint-network-policy
+az network vnet subnet update \
+ --name AppGW-PL-Subnet \
+ --vnet-name AppGW-PL-CLI-VNET \
+ --resource-group AppGW-PL-CLI-RG \
+ --disable-private-link-service-network-policies true
+
+# Get Application Gateway Frontend IP Name
+az network application-gateway frontend-ip list \
+ --gateway-name AppGW-PL-CLI \
+ --resource-group AppGW-PL-CLI-RG
+
+# Add a new Private Link configuration and associate it with an existing Frontend IP
+az network application-gateway private-link add \
+ --frontend-ip appGwPublicFrontendIp \
+ --name privateLinkConfig01 \
+ --subnet /subscriptions/XXXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX/resourceGroups/AppGW-PL-CLI-RG/providers/Microsoft.Network/virtualNetworks/AppGW-PL-CLI-VNET/subnets/AppGW-PL-Subnet \
+ --gateway-name AppGW-PL-CLI \
+ --resource-group AppGW-PL-CLI-RG
+
+# Get Private Link resource ID
+az network application-gateway private-link list \
+ --gateway-name AppGW-PL-CLI \
+ --resource-group AppGW-PL-CLI-RG
+++
+# Disable Private Endpoint Network Policies
+# https://docs.microsoft.com/en-us/azure/private-link/disable-private-endpoint-network-policy
+az network vnet subnet update \
+ --name MySubnet \
+ --vnet-name AppGW-PL-Endpoint-CLI-VNET \
+ --resource-group AppGW-PL-Endpoint-CLI-RG \
+ --disable-private-endpoint-network-policies true
+
+# Create Private Link Endpoint - Group ID is the same as the frontend IP configuration
+az network private-endpoint create \
+ --name AppGWPrivateEndpoint \
+ --resource-group AppGW-PL-Endpoint-CLI-RG \
+ --vnet-name AppGW-PL-Endpoint-CLI-VNET \
+ --subnet MySubnet \
+ --group-id appGwPublicFrontendIp \
+ --private-connection-resource-id /subscriptions/XXXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX/resourceGroups/AppGW-PL-CLI-RG/providers/Microsoft.Network/applicationGateways/AppGW-PL-CLI \
+ --connection-name AppGW-PL-Connection
+```
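After the endpoint is created, the resulting private endpoint connections on the gateway can be listed and, if needed, approved. A sketch reusing the example names above; the connection resource ID shown is a placeholder taken from the list output:

```azurecli
# List private endpoint connections on the Application Gateway
az network private-endpoint-connection list \
    --name AppGW-PL-CLI \
    --resource-group AppGW-PL-CLI-RG \
    --type Microsoft.Network/applicationGateways

# Approve a pending connection by its resource ID (placeholder shown)
az network private-endpoint-connection approve \
    --id /subscriptions/XXXXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXX/resourceGroups/AppGW-PL-CLI-RG/providers/Microsoft.Network/applicationGateways/AppGW-PL-CLI/privateEndpointConnections/AppGW-PL-Connection \
    --description "Approved"
```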
+
+A list of all Azure CLI references for Private Link Configuration on Application Gateway can be found here: [Azure CLI - Private Link](/cli/azure/network/application-gateway/private-link)
+++
+## Next steps
+
+- Learn about Azure Private Link: [What is Azure Private Link?](../private-link/private-link-overview.md)
application-gateway Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link.md
+
+ Title: Azure Application Gateway Private Link
+description: This article is an overview of Application Gateway Private Link.
++++ Last updated : 05/09/2022++++
+# Application Gateway Private Link
+
+Today, you can deploy your critical workloads securely behind Application Gateway, gaining the flexibility of Layer 7 load balancing features. Access to the backend workloads is possible in two ways:
+
+- Public IP address - your workloads are accessible over the Internet.
+- Private IP address - your workloads are accessible via a private IP address, but only from within the same VNet as the Application Gateway.
+
+Private Link for Application Gateway allows you to connect workloads over a private connection spanning VNets and subscriptions. When configured, a private endpoint will be placed into a defined virtual network's subnet, providing a private IP address for clients looking to communicate to the gateway. For a list of other PaaS services that support Private Link functionality, see [What is Azure Private Link?](../private-link/private-link-overview.md).
+++
+## Features and capabilities
+
+Private Link allows you to extend private connectivity to Application Gateway via a Private Endpoint in the following scenarios:
+- VNet in the same or different region from Application Gateway
+- VNet in the same or different subscription from Application Gateway
+- VNet in the same or different subscription and the same or different Azure AD tenant from Application Gateway
+
+You may also choose to block inbound public (Internet) access to Application Gateway and allow access only via private endpoints. Inbound management traffic still needs to be allowed to the application gateway. For more information, see [Application Gateway infrastructure configuration](configuration-infrastructure.md#network-security-groups).
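As a hedged sketch of that management-traffic rule, assuming hypothetical NSG and resource-group names; the 65200-65535 destination range is the documented inbound range for v2 SKUs:

```azurecli
# Allow inbound Application Gateway management traffic (v2 SKU port range)
# (resource group and NSG names are illustrative)
az network nsg rule create \
    --resource-group my-rg \
    --nsg-name my-appgw-nsg \
    --name AllowGatewayManager \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --source-address-prefixes GatewayManager \
    --destination-port-ranges 65200-65535
```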
+
+All features supported by Application Gateway are supported when accessed through a private endpoint, including support for AGIC.
+
+## Private Link components
+
+Four components are required to implement Private Link with Application Gateway:
+
+- Application Gateway Private Link Configuration
+
+ A Private link configuration can be associated with an Application Gateway Frontend IP address, which can then be used to establish a connection using a Private Endpoint. If there's no association to an Application Gateway frontend IP address, then the Private Link feature won't be enabled.
+
+- Application Gateway Frontend IP address
+
+ The public or private IP address that the Application Gateway Private Link Configuration must be associated with to enable the Private Link capabilities.
+
+- Private Endpoint
+
+ An Azure network resource that allocates a private IP address in your VNet address space. It's used to connect to the Application Gateway via the private IP address, similar to many other Azure services, such as Storage and Key Vault, that provide Private Link access.
+
+- Private Endpoint Connection
+
+ A connection on Application Gateway originated by Private Endpoints. You can auto-approve, manually approve, or reject connections to grant or deny access.
+
+## Limitations
+- API version 2020-03-01 or later should be used to configure Private Link configurations.
+- Static IP allocation method in the Private Link Configuration object isn't supported.
+- The subnet used for PrivateLinkConfiguration can't be the same as the Application Gateway subnet.
+- Private link configuration for Application Gateway does not expose the "Alias" property and must be referenced via resource URI.
+- Private Endpoint creation does not create a \*.privatelink DNS record/zone. All DNS records should be entered in existing zones used for your Application Gateway.
+- Azure Front Door and Application Gateway do not support chaining via Private Link.
+- Source IP address and `x-forwarded-for` headers will contain the Private Link IP addresses.
+
+## Next steps
+
+- [Configure Azure Application Gateway Private Link](private-link-configure.md)
+- [What is Azure Private Link?](../private-link/private-link-overview.md)
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Title: "Quickstart: Connect an existing Kubernetes cluster to Azure Arc" description: In this quickstart, you learn how to connect an Azure Arc-enabled Kubernetes cluster. Previously updated : 02/16/2022 Last updated : 05/11/2022 ms.devlang: azurecli
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
* A basic understanding of [Kubernetes core concepts](../../aks/concepts-clusters-workloads.md).
+* An identity (user or service principal) that can be used to [log in to Azure CLI](/cli/azure/authenticate-azure-cli) and connect your cluster to Azure Arc.
+
+ > [!IMPORTANT]
+ >
+ > * The identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
+ > * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) can be used for this identity. This role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc, and doesn't have permission to update, delete, or modify any other clusters or other Azure resources.
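For instance, a service principal with only that built-in role could be prepared like this; the principal name and scope are placeholders, not values from the article:

```azurecli
# Create a service principal scoped to a resource group with the
# Azure Arc onboarding built-in role (name and scope are illustrative)
az ad sp create-for-rbac \
    --name arc-k8s-onboarding \
    --role "Kubernetes Cluster - Azure Arc Onboarding" \
    --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```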
+ * [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0 * Install the **connectedk8s** Azure CLI extension of version >= 1.2.0:
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
az extension add --name connectedk8s ```
-* [Log in to Azure CLI](/cli/azure/authenticate-azure-cli) using the identity (user or service principal) that you want to use for connecting your cluster to Azure Arc.
- * The identity used needs to at least have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (`Microsoft.Kubernetes/connectedClusters`).
- * The [Kubernetes Cluster - Azure Arc Onboarding built-in role](../../role-based-access-control/built-in-roles.md#kubernetes-clusterazure-arc-onboarding) is useful for at-scale onboarding as it has the granular permissions required to only connect clusters to Azure Arc. This role doesn't have the permissions to update, delete, or modify any other clusters or other Azure resources.
- * An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options: * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/) * Create a Kubernetes cluster using Docker for [Mac](https://docs.docker.com/docker-for-mac/#kubernetes) or [Windows](https://docs.docker.com/docker-for-windows/#kubernetes)
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
-## Meet network requirements
-
-> [!IMPORTANT]
-> Azure Arc agents require the following outbound URLs on `https://:443` to function.
-> For `*.servicebus.windows.net`, websockets need to be enabled for outbound access on firewall and proxy.
-
-| Endpoint (DNS) | Description |
-| -- | - |
-| `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. |
-| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
-| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us`, `<region>.login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
-| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
-| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
-| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
-|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
-|`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |
- ## Register providers for Azure Arc-enabled Kubernetes ### [Azure CLI](#tab/azure-cli)
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
+## Meet network requirements
+
+> [!IMPORTANT]
+> Azure Arc agents require the following outbound URLs on `https://:443` to function.
+> For `*.servicebus.windows.net`, websockets need to be enabled for outbound access on firewall and proxy.
+
+| Endpoint (DNS) | Description |
+| -- | - |
+| `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. |
+| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
+| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us`, `<region>.login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
+| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
+| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
+| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
+|`*.servicebus.windows.net`, `guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
+|`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |
+ ## Create a resource group Run the following command:
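The command itself isn't captured in this digest; a sketch consistent with the quickstart, using illustrative names, would be:

```azurecli
# Create the resource group that will hold the connected cluster resource
az group create --name AzureArcTest --location EastUS --output table

# The cluster is then connected to Azure Arc in the following section
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
```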
azure-arc Onboard Ansible Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md
+
+ Title: Connect machines at scale using Ansible Playbooks
+description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using Ansible playbooks.
Last updated : 05/09/2022++++
+# Connect machines at scale using Ansible playbooks
+
+You can onboard Ansible-managed nodes to Azure Arc-enabled servers at scale using Ansible playbooks. To do so, you'll need to download, modify, and then run the appropriate playbook.
+
+Before you get started, be sure to review the [prerequisites](prerequisites.md) and verify that your subscription and resources meet the requirements. For information about supported regions and other related considerations, see [supported Azure regions](overview.md#supported-regions). Also review our [at-scale planning guide](plan-at-scale-deployment.md) to understand the design and deployment criteria, as well as our management and monitoring recommendations.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Generate a service principal and collect Azure details
+
+Before you can run the script to connect your machines, you'll need to do the following:
+
+1. Follow the steps to [create a service principal for onboarding at scale](onboard-service-principal.md#create-a-service-principal-for-onboarding-at-scale).
+
+ * Assign the Azure Connected Machine Onboarding role to your service principal and limit the scope of the role to the target Azure subscription or resource group.
+ * Make a note of the Service Principal Secret and Service Principal Client ID; you'll need these values later.
+
+1. Collect details on the Tenant ID, Subscription ID, Resource Group, and Region where the Azure Arc-enabled resource will be onboarded.
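As a sketch of step 1, the service principal could be created and scoped as follows; the principal name and scope are placeholders, and the secret appears in the command output:

```azurecli
# Create a service principal limited to the Connected Machine onboarding role
# (name and scope are hypothetical)
az ad sp create-for-rbac \
    --name arc-server-onboarding \
    --role "Azure Connected Machine Onboarding" \
    --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```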
+
+## Download the Ansible playbook
+
+If you are onboarding machines to Azure Arc-enabled servers, copy the following Ansible playbook template and save the playbook as `arc-server-onboard-playbook.yml`.
+
+```
+
+- name: Onboard Linux and Windows Servers to Azure Arc-enabled servers with public endpoint connectivity
+ hosts: <INSERT-HOSTS>
+ tasks:
+ - name: Download the Connected Machine Agent on Linux servers
+ become: yes
+ get_url:
+ url: https://aka.ms/azcmagent
+ dest: ~/install_linux_azcmagent.sh
+ mode: '700'
+ when: ansible_system == 'Linux'
+ - name: Download the Connected Machine Agent on Windows servers
+ win_get_url:
+ url: https://aka.ms/AzureConnectedMachineAgent
+ dest: C:\AzureConnectedMachineAgent.msi
+ when: ansible_os_family == 'Windows'
+ - name: Install the Connected Machine Agent on Linux servers
+ become: yes
+ shell: bash ~/install_linux_azcmagent.sh
+ when: ansible_system == 'Linux'
+ - name: Install the Connected Machine Agent on Windows servers
+ win_package:
+ path: C:\AzureConnectedMachineAgent.msi
+ when: ansible_os_family == 'Windows'
+ - name: Connect the Connected Machine Agent on Linux servers to Azure Arc
+ become: yes
+ shell: sudo azcmagent connect --service-principal-id <INSERT-SERVICE-PRINCIPAL-CLIENT-ID> --service-principal-secret <INSERT-SERVICE-PRINCIPAL-SECRET> --resource-group <INSERT-RESOURCE-GROUP> --tenant-id <INSERT-TENANT-ID> --location <INSERT-REGION> --subscription-id <INSERT-SUBSCRIPTION-ID>
+ when: ansible_system == 'Linux'
+ - name: Connect the Connected Machine Agent on Windows servers to Azure
+ win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id <INSERT-SERVICE-PRINCIPAL-CLIENT-ID> --service-principal-secret <INSERT-SERVICE-PRINCIPAL-SECRET> --resource-group <INSERT-RESOURCE-GROUP> --tenant-id <INSERT-TENANT-ID> --location <INSERT-REGION> --subscription-id <INSERT-SUBSCRIPTION-ID>'
+ when: ansible_os_family == 'Windows'
+```
+
+<!--If you are onboarding Linux servers to Azure Arc-enabled servers, download the following Ansible playbook template and save the playbook as `arc-server-onboard-playbook.yml`.
+
+```
+
+- name: Onboard Linux Server to Azure Arc-enabled servers with public endpoint
+ hosts: <INSERT-HOSTS>
+ tasks:
+ - name: Download the Connected Machine Agent
+ become: yes
+ get_url:
+ url: https://aka.ms/azcmagent
+ dest: ~/install_linux_azcmagent.sh
+ mode: '700'
+ when: ansible_system == 'Linux'
+ - name: Install the Connected Machine Agent
+ become: yes
+ shell: bash ~/install_linux_azcmagent.sh
+ when: ansible_system == 'Linux'
+ - name: Connect the Connected Machine Agent to Azure
+ become: yes
+ shell: sudo azcmagent connect --service-principal-id <INSERT-SERVICE-PRINCIPAL-CLIENT-ID> --service-principal-secret <INSERT-SERVICE-PRINCIPAL-SECRET> --resource-group <INSERT-RESOURCE-GROUP> --tenant-id <INSERT-TENANT-ID> --location <INSERT-REGION> --subscription-id <INSERT-SUBSCRIPTION-ID>
+ when: ansible_system == 'Linux'
+```-->
+
+## Modify the Ansible playbook
+
+After downloading the Ansible playbook, complete the following steps:
+
+1. Within the Ansible playbook, modify the placeholder fields under the tasks that connect the Connected Machine Agent to Azure with the service principal and Azure details collected earlier:
+
+ * Service Principal Id
+ * Service Principal Secret
+ * Resource Group
+ * Tenant Id
+ * Subscription Id
+ * Region
+
+1. In the hosts field, enter the target servers for onboarding to Azure Arc. You can employ Ansible patterns to selectively target which hybrid machines to onboard.
+
+## Run the Ansible playbook
+
+From the Ansible control node, run the Ansible playbook by invoking the `ansible-playbook` command:
+
+```
+ansible-playbook arc-server-onboard-playbook.yml
+```
+
+After the playbook has run, the **PLAY RECAP** will indicate whether all tasks completed successfully and will surface any nodes where tasks failed.
+
+## Verify the connection with Azure Arc
+
+After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your target hosts have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+
+## Next steps
+
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+- Review connection troubleshooting information in the [Troubleshoot Connected Machine agent guide](troubleshoot-agent-onboard.md).
+- Learn how to manage your machine using [Azure Policy](../../governance/policy/overview.md) for such things as VM [guest configuration](../../governance/policy/concepts/guest-configuration.md), verifying that the machine is reporting to the expected Log Analytics workspace, enabling monitoring with [VM insights](../../azure-monitor/vm/vminsights-enable-policy.md), and much more.
+
azure-arc Onboard Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-group-policy.md
The Group Policy to onboard Azure Arc-enabled servers requires a remote share wi
1. Download the latest version of the [Windows agent Windows Installer package](https://aka.ms/AzureConnectedMachineAgent) from the Microsoft Download Center and save it to the remote share.
-## Generate an onboarding script and configuration file from Azure Portal
+## Generate an onboarding script and configuration file from Azure portal
Before you can run the script to connect your machines, you'll need to do the following:
In the **Actions** tab, select **New**, then enter the following parameters in the
On the Group Policy Management Console, right-click on the desired Organizational Unit and select the option to link an existing GPO. Choose the Group Policy Object defined in the Scheduled Task. After 10 or 20 minutes, the Group Policy Object will be replicated to the respective domain controllers. Learn more about [creating and managing group policy in Azure AD Domain Services](../../active-directory-domain-services/manage-group-policy.md).
-After you have successfully installed the agent and configure it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
+After you have successfully installed the agent and configured it to connect to Azure Arc-enabled servers, go to the Azure portal to verify that the servers in your Organizational Unit have successfully connected. View your machines in the [Azure portal](https://aka.ms/hybridmachineportal).
## Next steps
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 03/21/2022 Last updated : 05/10/2022
The following versions of the Windows and Linux operating system are officially
* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported * Azure Editions are supported when running as a virtual machine on Azure Stack HCI
+* Windows IoT Enterprise
* Azure Stack HCI * Ubuntu 16.04, 18.04, and 20.04 LTS * CentOS Linux 7 and 8
azure-cache-for-redis Cache Aspnet Session State Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-aspnet-session-state-provider.md
ms.devlang: csharp Previously updated : 05/01/2017 Last updated : 05/06/2022 # ASP.NET Session State Provider for Azure Cache for Redis Azure Cache for Redis provides a session state provider that you can use to store your session state in-memory with Azure Cache for Redis instead of a SQL Server database. To use the caching session state provider, first configure your cache, and then configure your ASP.NET application for cache using the Azure Cache for Redis Session State NuGet package. For ASP.NET Core applications, read [Session and state management in ASP.NET Core](/aspnet/core/fundamentals/app-state).
-It's often not practical in a real-world cloud app to avoid storing some form of state for a user session, but some approaches impact performance and scalability more than others. If you have to store state, the best solution is to keep the amount of state small and store it in cookies. If that isn't feasible, the next best solution is to use ASP.NET session state with a provider for distributed, in-memory cache. The worst solution from a performance and scalability standpoint is to use a database backed session state provider. This topic provides guidance on using the ASP.NET Session State Provider for Azure Cache for Redis. For information on other session state options, see [ASP.NET Session State options](#aspnet-session-state-options).
+It's often not practical in a cloud app to avoid storing some form of state for a user session, but some approaches affect performance and scalability more than others. If you have to store state, the best solution is to keep the amount of state small and store it in cookies. If that isn't feasible, the next best solution is to use ASP.NET session state with a provider for distributed, in-memory cache. The worst solution from a performance and scalability standpoint is to use a database-backed session state provider. This article provides guidance on using the ASP.NET Session State Provider for Azure Cache for Redis. For information on other session state options, see [ASP.NET Session State options](#aspnet-session-state-options).
## Store ASP.NET session state in the cache
Install-Package Microsoft.Web.RedisSessionStateProvider
``` > [!IMPORTANT]
-> If you are using the clustering feature from the premium tier, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown. Moving to 2.0.1 or higher is a breaking change; for more information, see [v2.0.0 Breaking Change Details](https://github.com/Azure/aspnet-redis-providers/wiki/v2.0.0-Breaking-Change-Details). At the time of this article update, the current version of this package is 2.2.3.
+> If you are using the clustering feature from the premium tier, you must use [RedisSessionStateProvider](https://www.nuget.org/packages/Microsoft.Web.RedisSessionStateProvider) 2.0.1 or higher or an exception is thrown. Moving to 2.0.1 or higher is a breaking change.
> >
-The Redis Session State Provider NuGet package has a dependency on the StackExchange.Redis package. If the StackExchange.Redis package is not present in your project, it is installed.
+The Redis Session State Provider NuGet package has a dependency on the StackExchange.Redis package. If the StackExchange.Redis package isn't present in your project, it's installed.
The NuGet package downloads and adds the required assembly references and adds the following section into your web.config file. This section contains the required configuration for your ASP.NET application to use the Azure Cache for Redis Session State Provider.
Configure the attributes with the values on the left from your cache in the Micr
* **port** – use either your non-TLS/SSL port or your TLS/SSL port, depending on the TLS settings. * **accessKey** – use either the primary or secondary key for your cache. * **ssl** – true if you want to secure cache/client communications with TLS; otherwise false. Be sure to specify the correct port.
- * The non-TLS port is disabled by default for new caches. Specify true for this setting to use the TLS port. For more information about enabling the non-TLS port, see the [Access Ports](cache-configure.md#access-ports) section in the [Configure a cache](cache-configure.md) topic.
-* **throwOnError** – true if you want an exception to be thrown if there is a failure, or false if you want the operation to fail silently. You can check for a failure by checking the static Microsoft.Web.Redis.RedisSessionStateProvider.LastException property. The default is true.
-* **retryTimeoutInMilliseconds** – Operations that fail are retried during this interval, specified in milliseconds. The first retry occurs after 20 milliseconds, and then retries occur every second until the retryTimeoutInMilliseconds interval expires. Immediately after this interval, the operation is retried one final time. If the operation still fails, the exception is thrown back to the caller, depending on the throwOnError setting. The default value is 0, which means no retries.
+ * The non-TLS port is disabled by default for new caches. Specify true for this setting to use the TLS port. For more information about enabling the non-TLS port, see the [Access Ports](cache-configure.md#access-ports) section in the [Configure a cache](cache-configure.md) article.
+* **throwOnError** – true if you want an exception to be thrown when there's a failure, or false if you want the operation to fail silently. You can check for a failure by checking the static `Microsoft.Web.Redis.RedisSessionStateProvider.LastException` property. The default is true.
+* **retryTimeoutInMilliseconds** – Operations that fail are retried during this interval, specified in milliseconds. The first retry occurs after 20 milliseconds, and then retries occur every second until the `retryTimeoutInMilliseconds` interval expires. Immediately after this interval, the operation is retried one final time. If the operation still fails, the exception is thrown back to the caller, depending on the `throwOnError` setting. The default value is 0, which means no retries.
* **databaseId** – Specifies which database to use for cache output data. If not specified, the default value of 0 is used.
-* **applicationName** – Keys are stored in redis as `{<Application Name>_<Session ID>}_Data`. This naming scheme enables multiple applications to share the same Redis instance. This parameter is optional and if you do not provide it a default value is used.
-* **connectionTimeoutInMilliseconds** – This setting allows you to override the connectTimeout setting in the StackExchange.Redis client. If not specified, the default connectTimeout setting of 5000 is used. For more information, see [StackExchange.Redis configuration model](https://go.microsoft.com/fwlink/?LinkId=398705).
-* **operationTimeoutInMilliseconds** – This setting allows you to override the syncTimeout setting in the StackExchange.Redis client. If not specified, the default syncTimeout setting of 1000 is used. For more information, see [StackExchange.Redis configuration model](https://go.microsoft.com/fwlink/?LinkId=398705).
+* **applicationName** – Keys are stored in redis as `{<Application Name>_<Session ID>}_Data`. This naming scheme enables multiple applications to share the same Redis instance. This parameter is optional and if you don't provide it, a default value is used.
+* **connectionTimeoutInMilliseconds** – This setting allows you to override the `connectTimeout` setting in the StackExchange.Redis client. If not specified, the default `connectTimeout` setting of 5000 is used. For more information, see [StackExchange.Redis configuration model](https://go.microsoft.com/fwlink/?LinkId=398705).
+* **operationTimeoutInMilliseconds** – This setting allows you to override the `syncTimeout` setting in the StackExchange.Redis client. If not specified, the default `syncTimeout` setting of 1000 is used. For more information, see [StackExchange.Redis configuration model](https://go.microsoft.com/fwlink/?LinkId=398705).
* **redisSerializerType** - This setting allows you to specify custom serialization of session content that is sent to Redis. The type specified must implement `Microsoft.Web.Redis.ISerializer` and must declare public parameterless constructor. By default `System.Runtime.Serialization.Formatters.Binary.BinaryFormatter` is used. For more information about these properties, see the original blog post announcement at [Announcing ASP.NET Session State Provider for Redis](https://devblogs.microsoft.com/aspnet/announcing-asp-net-session-state-provider-for-redis-preview-release/).
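The host name, ports, and access keys for these attributes can be read from your cache resource with Azure CLI; a sketch assuming a hypothetical cache named *contoso* in resource group *contoso-rg*:

```azurecli
# Look up the host name and TLS port for the web.config attributes
az redis show --name contoso --resource-group contoso-rg \
    --query "{host:hostName, sslPort:sslPort}"

# Retrieve the primary and secondary access keys
az redis list-keys --name contoso --resource-group contoso-rg
```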
-Don't forget to comment out the standard InProc session state provider section in your web.config.
+Don't forget to comment out the standard `InProc` session state provider section in your web.config.
```xml <!-- <sessionState mode="InProc"
DonΓÇÖt forget to comment out the standard InProc session state provider section
</sessionState> --> ```
-Once these steps are performed, your application is configured to use the Azure Cache for Redis Session State Provider. When you use session state in your application, it is stored in an Azure Cache for Redis instance.
+Once these steps are performed, your application is configured to use the Azure Cache for Redis Session State Provider. When you use session state in your application, it's stored in an Azure Cache for Redis instance.
> [!IMPORTANT] > Data stored in the cache must be serializable, unlike the data that can be stored in the default in-memory ASP.NET Session State Provider. When the Session State Provider for Redis is used, be sure that the data types that are being stored in session state are serializable.
Once these steps are performed, your application is configured to use the Azure
## ASP.NET Session State options
-* In Memory Session State Provider - This provider stores the Session State in memory. The benefit of using this provider is it is simple and fast. However you cannot scale your Web Apps if you are using in memory provider since it is not distributed.
-* Sql Server Session State Provider - This provider stores the Session State in Sql Server. Use this provider if you want to store the Session state in persistent storage. You can scale your Web App but using Sql Server for Session has a performance impact on your Web App. You can also use this provider with an [In-Memory OLTP configuration](/archive/blogs/sqlserverstorageengine/asp-net-session-state-with-sql-server-in-memory-oltp) to help improve performance.
+* In Memory Session State Provider - This provider stores the Session State in memory. The benefit of using this provider is simplicity and speed. However, you can't scale your Web Apps if you're using the in-memory provider since it isn't distributed.
+* Sql Server Session State Provider - This provider stores the Session State in Sql Server. Use this provider if you want to store the Session state in persistent storage. You can scale your Web App but using Sql Server for Session has a performance effect on your Web App. You can also use this provider with an [In-Memory OLTP configuration](/archive/blogs/sqlserverstorageengine/asp-net-session-state-with-sql-server-in-memory-oltp) to help improve performance.
* Distributed In Memory Session State Provider such as Azure Cache for Redis Session State Provider - This provider gives you the best of both worlds. Your Web App can have a simple, fast, and scalable Session State Provider. Because this provider stores the Session state in a Cache, your app has to take into consideration all the characteristics associated with talking to a Distributed In Memory Cache, such as transient network failures. For best practices on using Cache, see [Caching guidance](/azure/architecture/best-practices/caching) from Microsoft Patterns & Practices [Azure Cloud Application Design and Implementation Guidance](https://github.com/mspnp/azure-guidance). For more information about session state and other best practices, see [Web Development Best Practices (Building Real-World Cloud Apps with Azure)](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/web-development-best-practices).
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
You can view and configure the following settings using the **Resource Menu**. T
### Activity log
-Select **Activity log** to view actions done to your cache. You can also use filtering to expand this view to include other resources. For more information on working with audit logs, see [Audit operations with Resource Manager](../azure-monitor/essentials/activity-log.md). For more information on monitoring Azure Cache for Redis events, see [Operations and alerts](cache-how-to-monitor.md#operations-and-alerts).
+Select **Activity log** to view actions done to your cache. You can also use filtering to expand this view to include other resources. For more information on working with audit logs, see [Audit operations with Resource Manager](../azure-monitor/essentials/activity-log.md). For more information on monitoring Azure Cache for Redis events, see [alerts](cache-how-to-monitor.md#alerts).
### Access control (IAM)
azure-cache-for-redis Cache How To Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-geo-replication.md
Title: Configure geo-replication for Premium Azure Cache for Redis instances description: Learn how to replicate your Azure Cache for Redis Premium instances across Azure regions - Last updated 02/08/2021 + # Configure geo-replication for Premium Azure Cache for Redis instances In this article, you'll learn how to configure a geo-replicated Azure Cache using the Azure portal.
-Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are usually located in different Azure regions, though they aren't required to. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary. This process continues until the link between the two instances is removed.
+Geo-replication links together two Premium Azure Cache for Redis instances and creates a data replication relationship. These cache instances are typically located in different Azure regions, though that isn't required. One instance acts as the primary, and the other as the secondary. The primary handles read and write requests and propagates changes to the secondary. This process continues until the link between the two instances is removed.
> [!NOTE] > Geo-replication is designed as a disaster-recovery solution.
Some features aren't supported with geo-replication:
After geo-replication is configured, the following restrictions apply to your linked cache pair: -- The secondary linked cache is read-only; you can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance, it is important to note that whenever a full data sync is happening between the Geo-Primary and the Geo-Secondary (happens when either Geo-Primary or Geo-Secondary is updated and on some reboot scenarios as well), the Geo-Secondary instance will throw errors (stating that a full data sync is in progress) on any Redis operation against it until the full data sync between Geo-Primary and Geo-Secondary is complete. Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
+- The secondary linked cache is read-only; you can read from it, but you can't write any data to it. If you choose to read from the Geo-Secondary instance while a full data sync is happening between the Geo-Primary and the Geo-Secondary, the Geo-Secondary instance throws errors on any Redis operation against it until the full data sync is complete. The errors state that a full data sync is in progress. A full data sync occurs when either the Geo-Primary or Geo-Secondary is updated, and in some reboot scenarios. Applications reading from Geo-Secondary should be built to fall back to the Geo-Primary whenever the Geo-Secondary is throwing such errors.
- Any data that was in the secondary linked cache before the link was added is removed. If the geo-replication is later removed however, the replicated data remains in the secondary linked cache. - You can't [scale](cache-how-to-scale.md) either cache while the caches are linked. - You can't [change the number of shards](cache-how-to-premium-clustering.md) if the cache has clustering enabled.
After geo-replication is configured, the following restrictions apply to your li
## Add a geo-replication link
-1. To link two caches together for geo-replication, fist click **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, click **Add cache replication link** from **Geo-replication** on the left.
+1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left. (A CLI alternative is sketched after the screenshot.)
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Cache geo-replication menu":::
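If you prefer scripting, the same link can be created with the Azure CLI. The following is a minimal sketch, assuming two existing Premium caches; the resource group and cache names are placeholders.

```azurecli
# Run against the cache that will act as primary.
# --server-to-link accepts the name or resource ID of the future secondary.
az redis server-link create \
    --resource-group myResourceGroup \
    --name myPrimaryCache \
    --server-to-link mySecondaryCache \
    --replication-role Secondary
```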
After geo-replication is configured, the following restrictions apply to your li
## Remove a geo-replication link
-1. To remove the link between two caches and stop geo-replication, click **Unlink caches** from the **Geo-replication** on the left .
+1. To remove the link between two caches and stop geo-replication, select **Unlink caches** from **Geo-replication** on the left. (A CLI alternative is sketched after the screenshot.)
:::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-unlink.png" alt-text="Unlink caches":::
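Unlinking has a CLI equivalent as well; a minimal sketch with the same placeholder names:

```azurecli
# Remove the replication link; the secondary becomes a standalone cache again.
az redis server-link delete \
    --resource-group myResourceGroup \
    --name myPrimaryCache \
    --linked-server-name mySecondaryCache
```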
No, geo-replication is only available for Premium tier caches.
### Is my cache available for use during the linking or unlinking process? -- When linking, the primary linked cache remains available while the linking process completes.-- When linking, the secondary linked cache isn't available until the linking process completes.-- When unlinking, both caches remain available while the unlinking process completes.
+- During linking, the primary linked cache remains available.
+- During linking, the secondary linked cache isn't available until the process completes.
+- During unlinking, both caches remain available.
### Can I link more than two caches together?
Yes, geo-replication of caches in VNets is supported with caveats:
- Geo-replication between caches in the same VNet is supported. - Geo-replication between caches in different VNets is also supported. - If the VNets are in the same region, you can connect them using [VNet peering](../virtual-network/virtual-network-peering-overview.md) or a [VPN Gateway VNet-to-VNet connection](../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).
- - If the VNets are in different regions, geo-replication using VNet peering is supported, but a client VM in VNet 1 (region 1) is not able to access the cache in VNet 2 (region 2) using it's DNS name because of a constraint with Basic internal load balancers. For more information about VNet peering constraints, see [Virtual Network - Peering - Requirements and constraints](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). We recommend to use a VPN Gateway VNet-to-VNet connection.
+ - If the VNets are in different regions, geo-replication using VNet peering is supported. A client VM in VNet 1 (region 1) isn't able to access the cache in VNet 2 (region 2) using its DNS name because of a constraint with Basic internal load balancers. For more information about VNet peering constraints, see [Virtual Network - Peering - Requirements and constraints](../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). We recommend using a VPN Gateway VNet-to-VNet connection.
Using [this Azure template](https://azure.microsoft.com/resources/templates/redis-vnet-geo-replication/), you can quickly deploy two geo-replicated caches into a VNet connected with a VPN Gateway VNet-to-VNet connection.
Replication is continuous and asynchronous. It doesn't happen on a specific sche
### How long does geo-replication replication take?
-Replication is incremental, asynchronous, and continuous and the time taken isn't much different from the latency across regions. Under certain circumstances, the secondary cache can be required to do a full sync of the data from the primary. The replication time in this case is depends on a number of factors like: load on the primary cache, available network bandwidth, and inter-region latency. We have found replication time for a full 53-GB geo-replicated pair can be anywhere between 5 to 10 minutes.
+Replication is incremental, asynchronous, and continuous, and the time taken isn't much different from the latency across regions. Under certain circumstances, the secondary cache can be required to do a full sync of the data from the primary. The replication time in this case depends on many factors, such as the load on the primary cache, the available network bandwidth, and inter-region latency. We have found that replication time for a full 53-GB geo-replicated pair can be anywhere from 5 to 10 minutes.
### Is the replication recovery point guaranteed?
Yes, geo-replication can be managed using the Azure portal, PowerShell, or Azure
### How much does it cost to replicate my data across Azure regions?
-When using geo-replication, data from the primary linked cache is replicated to the secondary linked cache. There's no charge for the data transfer if the two linked caches are in the same region. If the two linked caches are in different regions, the data transfer charge is the network egress cost of data moving across either region. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+When you use geo-replication, data from the primary linked cache is replicated to the secondary linked cache. There's no charge for the data transfer if the two linked caches are in the same region. If the two linked caches are in different regions, the data transfer charge is the network egress cost of data moving across either region. For more information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
### Why did the operation fail when I tried to delete my linked cache?
In general, it's recommended for your cache to exist in the same Azure region as
### How does failing over to the secondary linked cache work?
-Automatic failover across Azure regions isn't supported for geo-replicated caches. In a disaster-recovery scenario, customers should bring up the entire application stack in a coordinated manner in their backup region. Letting individual application components decide when to switch to their backups on their own can negatively impact performance.
+Automatic failover across Azure regions isn't supported for geo-replicated caches. In a disaster-recovery scenario, customers should bring up the entire application stack in a coordinated manner in their backup region. Letting individual application components decide when to switch to their backups on their own can negatively affect performance.
-One of the key benefits of Redis is that it's a very low-latency store. If the customer's main application is in a different region than its cache, the added round-trip time would have a noticeable impact on performance. For this reason, we avoid failing over automatically because of transient availability issues.
+One of the key benefits of Redis is that it's a very low-latency store. If the customer's main application is in a different region than its cache, the added round-trip time would have a noticeable effect on performance. For this reason, we avoid failing over automatically because of transient availability issues.
To start a customer-initiated failover, first unlink the caches. Then, change your Redis client to use the connection endpoint of the (formerly linked) secondary cache. When the two caches are unlinked, the secondary cache becomes a regular read-write cache again and accepts requests directly from Redis clients.
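As a sketch of that failover sequence with the Azure CLI, assuming the application reads its cache connection string from a hypothetical app setting named `CacheConnection` (all resource names are placeholders):

```azurecli
# 1. Unlink the caches so the former secondary accepts writes again.
az redis server-link delete -g myResourceGroup -n myPrimaryCache \
    --linked-server-name mySecondaryCache

# 2. Repoint the application at the formerly linked secondary cache.
key=$(az redis list-keys -g myResourceGroup -n mySecondaryCache \
    --query primaryKey -o tsv)
az webapp config appsettings set -g myResourceGroup -n myWebApp \
    --settings CacheConnection="mySecondaryCache.redis.cache.windows.net:6380,password=$key,ssl=True,abortConnect=False"
```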
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Previously updated : 02/08/2021 Last updated : 05/06/2022+ # Monitor Azure Cache for Redis
Use Azure Monitor to:
- add and remove metrics from the charts - and set alerts when certain conditions are met
-Metrics for Azure Cache for Redis instances are collected using the Redis [INFO](https://redis.io/commands/info) command. Metrics are collected approximately twice per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
+Metrics for Azure Cache for Redis instances are collected using the Redis [INFO](https://redis.io/commands/info) command. Metrics are collected approximately two times per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
To configure a different retention policy, see [Export cache metrics](#export-cache-metrics). For more information about the different INFO values used for each cache metric, see [Available metrics and reporting intervals](#available-metrics-and-reporting-intervals).
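The same metrics can also be retrieved programmatically. A minimal Azure CLI sketch, with placeholder resource names:

```azurecli
# Resolve the cache's resource ID, then pull Cache Hits at one-minute grain.
cacheId=$(az redis show -g myResourceGroup -n myCache --query id -o tsv)
az monitor metrics list --resource "$cacheId" \
    --metric cachehits --aggregation Total --interval PT1M --output table
```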
-<a name="view-cache-metrics"></a>
+## View cache metrics
To view cache metrics, [browse](cache-configure.md#configure-azure-cache-for-redis-settings) to your cache instance in the [Azure portal](https://portal.azure.com). Azure Cache for Redis provides some built-in charts on the left using **Overview** and **Redis metrics**. Each chart can be customized by adding or removing metrics and changing the reporting interval.
-![Six graphs are shown. One of them is Cache Hits and Cache Misses past hour.](./media/cache-how-to-monitor/redis-cache-redis-metrics-blade.png)
## View pre-configured metrics charts
On the left, **Overview** has the following pre-configured monitoring charts.
The **Monitoring** section in **Overview** on the left has **Hits and Misses**, **Gets and Sets**, **Connections**, and **Total Commands** charts.
-![Monitoring charts](./media/cache-how-to-monitor/redis-cache-monitoring-part.png)
### Usage charts The **Usage** section in **Overview** on the left has **Redis Server Load**, **Memory Usage**, **Network Bandwidth**, and **CPU Usage** charts, and also displays the **Pricing tier** for the cache instance.
-![Usage charts](./media/cache-how-to-monitor/redis-cache-usage-part.png)
The **Pricing tier** displays the cache pricing tier, and can be used to [scale](cache-how-to-scale.md) the cache to a different pricing tier.
Use [Azure Monitor for Azure Cache for Redis](../azure-monitor/insights/redis-ca
For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using the Azure Monitor metrics explorer. Select **Metrics** from the **Resource menu**, and customize your chart using your preferred metrics, reporting interval, chart type, and more.
-![In the left navigation pane of contoso55, Metrics is an option under Monitoring and is highlighted. On Metrics there is a list of metrics. Cache hits and Cache misses are selected.](./media/cache-how-to-monitor/redis-cache-monitor.png)
+In the left navigation pane of contoso55, Metrics is an option under Monitoring and is highlighted. On **Metrics**, there's a list of metrics; Cache hits and Cache misses are selected.
-For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
-<a name="enable-cache-diagnostics"></a>
+For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
## Export cache metrics
To configure a storage account for your cache metrics:
1. Under the table heading **metric**, select the check box beside the line items you want to store, such as **AllMetrics**. Specify a **Retention (days)** policy. The maximum retention you can specify is **365 days**. However, if you want to keep the metrics data forever, set **Retention (days)** to **0**. 1. Select **Save**.
-![Redis diagnostics](./media/cache-how-to-monitor/redis-cache-diagnostics.png)
>[!NOTE] >In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-values).
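The equivalent diagnostic setting can be scripted. A minimal sketch, assuming the cache and storage account share a resource group (names are placeholders):

```azurecli
cacheId=$(az redis show -g myResourceGroup -n myCache --query id -o tsv)
az monitor diagnostic-settings create \
    --name archive-cache-metrics \
    --resource "$cacheId" \
    --storage-account myStorageAccount \
    --metrics '[{"category": "AllMetrics", "enabled": true, "retentionPolicy": {"enabled": true, "days": 30}}]'
```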
Each metric includes two versions. One metric measures performance for the entir
| Used Memory Percentage | The % of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. | | Used Memory RSS |The amount of cache memory used in MB during the specified reporting interval, including fragmentation and metadata. This value maps to `used_memory_rss` from the Redis INFO command. |
-<a name="operations-and-alerts"></a>
- ## Alerts You can configure alerts based on metrics and activity logs. Azure Monitor allows you to configure an alert to do the following when it triggers:
You can configure to receive alerts based on metrics and activity logs. Azure Mo
To configure Alert rules for your cache, select **Alert rules** from the **Resource menu**.
-![Monitoring](./media/cache-how-to-monitor/redis-cache-monitoring.png)
For more information about configuring and using Alerts, see [Overview of Alerts](../azure-monitor/alerts/alerts-classic-portal.md).-
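For example, an alert on sustained server load could be created from the command line as follows; the metric name, threshold, and windows here are illustrative assumptions, not recommendations:

```azurecli
cacheId=$(az redis show -g myResourceGroup -n myCache --query id -o tsv)
az monitor metrics alert create \
    --name cache-high-server-load \
    --resource-group myResourceGroup \
    --scopes "$cacheId" \
    --condition "avg serverLoad > 80" \
    --window-size 5m --evaluation-frequency 1m \
    --description "Redis server load above 80% for 5 minutes"
```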
-<!--
-## Activity Logs
-
-Activity logs provide insight into the operations that completed on your Azure Cache for Redis instances. It was previously known as "audit logs" or "operational logs". Using activity logs, you can determine the "what, who, and when" for any write operations (PUT, POST, DELETE) taken on your Azure Cache for Redis instances.
-
-> [!NOTE]
-> Activity logs do not include read (GET) operations.
->
-
-To view activity logs for your cache, select **Activity logs** from the **Resource menu**.
-
-For more information about Activity logs, see [Overview of the Azure Activity Log](../azure-monitor/essentials/platform-logs-overview.md).
>
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
Clustering is enabled **New Azure Cache for Redis** on the left during cache cr
:::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected.":::
- Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#enable-cache-diagnostics), metrics are captured separately for each shard and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
+ Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#export-cache-metrics), metrics are captured separately for each shard and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
1. Select the **Next: Tags** tab or select the **Next: Tags** button at the bottom of the page.
azure-cache-for-redis Cache How To Premium Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-vnet.md
Title: Configure a virtual network - Premium-tier Azure Cache for Redis instance description: Learn how to create and manage virtual network support for your Premium-tier Azure Cache for Redis instance - Previously updated : 02/08/2021 Last updated : 05/06/2022+ # Configure virtual network support for a Premium Azure Cache for Redis instance
When your cache is part of a virtual network, only clients in the virtual networ
Customers can connect an [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) circuit to their virtual network infrastructure. In this way, they extend their on-premises network to Azure.
-By default, a newly created ExpressRoute circuit doesn't do forced tunneling (advertisement of a default route, 0.0.0.0/0) on a virtual network. As a result, outbound internet connectivity is allowed directly from the virtual network. Client applications can connect to other Azure endpoints, which include an Azure Cache for Redis instance.
+By default, a newly created ExpressRoute circuit doesn't use forced tunneling (advertisement of a default route, 0.0.0.0/0) on a virtual network. As a result, outbound internet connectivity is allowed directly from the virtual network. Client applications can connect to other Azure endpoints, which include an Azure Cache for Redis instance.
A common customer configuration is to use forced tunneling (advertise a default route), which forces outbound internet traffic to instead flow on-premises. This traffic flow breaks connectivity with Azure Cache for Redis if the outbound traffic is then blocked on-premises such that the Azure Cache for Redis instance isn't able to communicate with its dependencies.
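One way to check whether a default route is being advertised into the client subnet is to inspect the effective routes on a client VM's NIC. A minimal sketch with placeholder names; a 0.0.0.0/0 entry with a next hop of VirtualNetworkGateway indicates forced tunneling:

```azurecli
az network nic show-effective-route-table \
    --resource-group myResourceGroup \
    --name myClientVmNic \
    --output table
```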
azure-cache-for-redis Quickstart Create Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/quickstart-create-redis.md
Previously updated : 02/08/2021
-#Customer intent: As a developer new to Azure Cache for Redis, I want to create an instance of Azure Cache for Redis Enterprise tier.
Last updated : 05/06/2022+ # Quickstart: Create an open-source Redis cache
You'll need an Azure subscription before you begin. If you don't have one, creat
In this quickstart, you learned how to create an instance of Azure Cache for Redis.
-> [!div class="nextstepaction"]
> [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
All of these services can solve integration problems and automate business proce
If you're looking for a more general comparison between Azure Functions and other Azure compute options, see [Criteria for choosing an Azure compute service](/azure/architecture/guide/technology-choices/compute-comparison) and [Choosing an Azure compute option for microservices](/azure/architecture/microservices/design/compute-options).
+For a good summary and comparison of automation service options in Azure, see [Choose the Automation services in Azure](../automation/automation-services.md).
+ ## Compare Microsoft Power Automate and Azure Logic Apps Power Automate and Logic Apps are both *designer-first* integration services that can create workflows. Both services integrate with various SaaS and enterprise applications.
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
Below are the QPS usage limits for each Azure Maps service by Pricing Tier.
| Copyright Service | 10 | 10 | 10 | | Creator - Alias, TilesetDetails | 10 | Not Available | Not Available | | Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available |
-| Data Service | 50 | 50 | 50 |
-| Elevation Service | 50 | 50 | 50 |
+| Data Service | 50 | 50 | Not Available |
+| Elevation Service | 50 | 50 | Not Available |
| Geolocation Service | 50 | 50 | 50 |
-| Render Service - Contour tiles, DEM tiles, Elevation tiles, Customer tiles, Traffic tiles and Static maps | 50 | 50 | 50 |
+| Render Service - Contour tiles, Digital Elevation Model (DEM) tiles and Customer tiles | 50 | 50 | Not Available |
+| Render Service - Traffic tiles and Static maps | 50 | 50 | 50 |
| Render Service - Road tiles | 500 | 500 | 50 |
-| Render Service - Satellite tiles | 250 | 250 | 50 |
+| Render Service - Satellite tiles | 250 | 250 | Not Available |
| Render Service - Weather tiles | 100 | 100 | 50 |
-| Route Service - Batch | 10 | 10 | 10 |
+| Route Service - Batch | 10 | 10 | Not Available |
| Route Service - Non-Batch | 50 | 50 | 50 |
-| Search Service - Batch | 10 | 10 | 10 |
+| Search Service - Batch | 10 | 10 | Not Available |
| Search Service - Non-Batch | 500 | 500 | 50 | | Search Service - Non-Batch Reverse | 250 | 250 | 50 |
-| Spatial Service | 50 | 50 | 50 |
+| Spatial Service | 50 | 50 | Not Available |
| Timezone Service | 50 | 50 | 50 | | Traffic Service | 50 | 50 | 50 | | Weather Service | 50 | 50 | 50 |
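Clients that exceed these QPS limits receive HTTP 429 (Too Many Requests) responses, so calling code should back off and retry. A minimal bash sketch against the Search service; the key variable, query, and backoff values are assumptions:

```bash
# Retry an Azure Maps Search request a few times when throttled (HTTP 429).
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o response.json -w "%{http_code}" \
    "https://atlas.microsoft.com/search/address/json?api-version=1.0&subscription-key=${AZURE_MAPS_KEY}&query=400%20Broad%20St%2C%20Seattle")
  [ "$status" != "429" ] && break
  sleep $((attempt * 2))   # simple linear backoff before the next attempt
done
cat response.json
```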
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
armclient PUT /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>
You can also use the [Azure CLI](/cli/azure/reference-index#az-rest) tool: ```bash
-az rest --method post --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body '{"scheduledQueryRulesEnabled": true}'
+az rest --method put --url /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName>/alertsversion?api-version=2017-04-26-preview --body '{"scheduledQueryRulesEnabled": true}'
``` If the switch is successful, the response is:
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
appInsights.start();
### Automatic web snippet injection (Preview)
-Automatic web snippet injection allows you to enable [Application Insights Usage Experiences](usage-overview.md) and Browser Diagnostic Experiences with a simple configuration. It provides an easier alternative to manually adding the JavaScript snippet or NPM package to your JavaScript web code. For node server with configuration, set `enableAutoWebSnippetInjection` to `true` or alternatively set environment variable `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. See [Application Insights Node.js Github Readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview) for more information.
+Automatic web snippet injection allows you to enable [Application Insights Usage Experiences](usage-overview.md) and Browser Diagnostic Experiences with a simple configuration. It provides an easier alternative to manually adding the JavaScript snippet or NPM package to your JavaScript web code. For a Node.js server, set `enableAutoWebSnippetInjection` to `true` in the configuration, or alternatively set the environment variable `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED` to `true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. See the [Application Insights Node.js GitHub Readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview) for more information.
### Automatic third-party instrumentation
azure-monitor Change Analysis Outages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-outages.md
+
+ Title: Track a web app outage using Change Analysis
+description: Learn how to identify the root cause of a web app outage using Azure Monitor Change Analysis.
+++
+ms.contributor: cawa
+ Last updated : 05/09/2022++++
+# Track a web app outage using Change Analysis
+
+When issues happen, one of the first things to check is what changed in your application, configuration, and resources, so you can triage and identify the root cause. Change Analysis provides a centralized view of the changes in your subscriptions for up to the past 14 days to provide the history of changes for troubleshooting issues.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Enable Change Analysis to track changes for Azure resources and for Azure Web App configurations
+> * Troubleshoot a Web App issue using Change Analysis
+
+## Prerequisites
+
+An Azure Web App with a Storage account dependency. Follow instructions at [ChangeAnalysis-webapp-storage-sample](https://github.com/Azure-Samples/changeanalysis-webapp-storage-sample) if you haven't already deployed one.
+
+## Enable Change Analysis
+
+In the Azure portal, navigate to the Change Analysis service home page.
+
+If this is your first time using the Change Analysis service, the page may take up to a few minutes to register the `Microsoft.ChangeAnalysis` resource provider in your selected subscriptions.
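+
+If you'd rather not wait for the portal, you can register the provider ahead of time. A minimal Azure CLI sketch:
+
+```azurecli
+# Register the Change Analysis resource provider and check its state.
+az provider register --namespace Microsoft.ChangeAnalysis
+az provider show --namespace Microsoft.ChangeAnalysis --query registrationState -o tsv
+```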
++
+Once the Change Analysis page loads, you can see resource changes in your subscriptions. To view detailed web app in-guest change data:
+
+- Select **Enable now** from the banner, or
+- Select **Configure** from the top menu.
+
+In the web app in-guest enablement pane, select the web app you'd like to enable:
++
+Now Change Analysis is fully enabled to track both resources and web app in-guest changes.
+
+## Simulate a web app outage
+
+In a typical team environment, multiple developers can work on the same application without notifying the other developers. Simulate this scenario and make a change to the web app setting:
+
+```azurecli
+az webapp config appsettings set -g {resourcegroup_name} -n {webapp_name} --settings AzureStorageConnection=WRONG_CONNECTION_STRING
+```
+
+Visit the web app URL to view the following error:
++
+## Troubleshoot the outage using Change Analysis
+
+In the Azure portal, navigate to the Change Analysis overview page. Since you've triggered a web app outage, you'll see an entry of change for `AzureStorageConnection`:
++
+Since the connection string is a secret value, we hide this on the overview page for security purposes. With sufficient permission to read the web app, you can select the change to view details around the old and new values:
++
+The change details blade also shows important information, including who made the change.
+
+Now that you've identified the web app in-guest change, you can proceed with troubleshooting the issue.
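+
+For example, to resolve this particular outage, you could restore a valid connection string using the same command pattern that introduced the bad value; the storage account name placeholder is an assumption:
+
+```azurecli
+az webapp config appsettings set -g {resourcegroup_name} -n {webapp_name} \
+    --settings AzureStorageConnection="$(az storage account show-connection-string \
+    -g {resourcegroup_name} -n {storage_account_name} --query connectionString -o tsv)"
+```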
+
+## Next steps
+
+Learn more about [Change Analysis](./change-analysis.md).
azure-monitor Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/ingestion-time-transformations.md
Last updated 01/19/2022
-# Tutorial: Ingestion-time transformations in Azure Monitor Logs (preview)
-[Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested.in
+# Ingestion-time transformations in Azure Monitor Logs (preview)
+[Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested.
[!INCLUDE [Sign up for preview](../../../includes/azure-monitor-custom-logs-signup.md)]
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data that is sent to yo
## Limitations - All tables will be supported in export, but currently limited to those specified in the [supported tables](#supported-tables) section.- The legacy custom log won't be supported in export. The next generation of custom log available in preview early 2022 can be exported.
+- Legacy custom log using the [HTTP Data Collector API](./data-collector-api.md) won't be supported in export, while data for [DCR-based custom logs](./custom-logs-overview.md) can be exported.
- You can define up to 10 enabled rules in your workspace. More rules are allowed when disabled. - Destinations must be in the same region as the Log Analytics workspace. - Storage Account must be unique across rules in workspace.
azure-monitor Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/restore.md
Set the query time range by either:
```kusto let startTime =datetime(01/01/2022 8:00:00 PM); let endTime =datetime(01/05/2022 8:00:00 PM);
- TabelName_RST
+ TableName_RST
| where TimeGenerated between(startTime .. endTime) ```
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pr
## Next steps - [Learn more about data retention and archiving data.](data-retention-archive.md)-- [Learn about Search jobs, which is another method for retrieving archived data.](search-jobs.md)
+- [Learn about Search jobs, which is another method for retrieving archived data.](search-jobs.md)
azure-monitor Tutorial Ingestion Time Transformations Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations-api.md
Last updated 02/20/2022
# Tutorial: Add ingestion-time transformation to Azure Monitor Logs using resource manager templates (preview) [Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested. This tutorial walks you through configuration of a sample ingestion time transformation using resource manager templates. ++ > [!NOTE] > This tutorial uses resource manager templates and REST API to configure an ingestion-time transformation. See [Tutorial: Add ingestion-time transformation to Azure Monitor Logs using the Azure portal (preview)](tutorial-ingestion-time-transformations.md) for the same tutorial using the Azure portal.
azure-monitor Tutorial Ingestion Time Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-ingestion-time-transformations.md
Last updated 02/20/2022
# Add ingestion-time transformation to Azure Monitor Logs using the Azure portal (preview) [Ingestion-time transformations](ingestion-time-transformations.md) allow you to manipulate incoming data before it's stored in a Log Analytics workspace. You can add data filtering, parsing and extraction, and control the structure of the data that gets ingested. This tutorial walks you through configuration of a sample ingestion time transformation using the Azure portal. + > [!NOTE] > This tutorial uses the Azure portal to configure an ingestion-time transformation. See [Tutorial: Add ingestion-time transformation to Azure Monitor Logs using resource manager templates (preview)](tutorial-ingestion-time-transformations-api.md) for the same tutorial using resource manager templates and REST API.
azure-portal Recover Shared Deleted Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/recover-shared-deleted-dashboard.md
Title: Recover a deleted dashboard in the Azure portal description: If you delete a published dashboard in the Azure portal, you can recover the dashboard. Previously updated : 03/25/2021 Last updated : 05/10/2022 # Recover a deleted dashboard in the Azure portal
-If you're in the global Azure cloud, and you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 14 days of the delete. If you're in an Azure Government cloud or the dashboard isn't published, you cannot recover it, and you must rebuild it. For more information about publishing a dashboard, see [Publish dashboard](azure-portal-dashboard-share-access.md#publish-a-dashboard). Follow these steps to recover a published dashboard:
+If you're in the global Azure cloud, and you delete a _published_ dashboard in the Azure portal, you can recover that dashboard within 7 days of the delete. If you're in an Azure Government cloud or the dashboard isn't published, you cannot recover it, and you must rebuild it. For more information about publishing a dashboard, see [Publish dashboard](azure-portal-dashboard-share-access.md#publish-a-dashboard). Follow these steps to recover a published dashboard:
1. From the Azure portal menu, select **Resource groups**, then select the resource group where you published the dashboard (by default, it's named **dashboards**).
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The identifier is returned in the following format:
You use this function to get the resource ID for resources that are [deployed to the management group](deploy-to-management-group.md) rather than a resource group. The returned ID differs from the value returned by the [resourceId](#resourceid) function by not including a subscription ID and a resource group value.
-### managementGrouopResourceID example
+### managementGroupResourceID example
The following template creates a policy definition and assigns the policy definition. It uses the `managementGroupResourceId` function to get the resource ID for the policy definition.
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2017
typeHandlerVersion: '1.0' autoUpgradeMinorVersion: true settings: {
- workspaceId: ((!empty(logAnalytics)) ? reference(logAnalytics, '2015-11-01-preview').customerId : json('null'))
+ workspaceId: ((!empty(logAnalytics)) ? reference(logAnalytics, '2015-11-01-preview').customerId : null)
} protectedSettings: {
- workspaceKey: ((!empty(logAnalytics)) ? listKeys(logAnalytics, '2015-11-01-preview').primarySharedKey : json('null'))
+ workspaceKey: ((!empty(logAnalytics)) ? listKeys(logAnalytics, '2015-11-01-preview').primarySharedKey : null)
} } }
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
The following table details the features and limits of the Basic, Standard, and
### Media Services v2 (legacy)
-For limits specific to Media Services v2 (legacy), see [Media Services v2 (legacy)](/media-services/previous/media-services-quotas-and-limitations)
+For limits specific to Media Services v2 (legacy), see [Media Services v2 (legacy)](/azure/media-services/previous/media-services-quotas-and-limitations)
## Mobile Services limits
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | networkintentpolicies | Yes | Yes | No | > | networkinterfaces | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move NICs. | > | networkprofiles | No | No | No |
-> | networksecuritygroups | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NGSs). |
+> | networksecuritygroups | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NSGs). |
> | networkwatchers | No | No | No | > | networkwatchers / connectionmonitors | Yes | No | No | > | networkwatchers / flowlogs | Yes | No | No |
azure-sql-edge Date Bucket Tsql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/date-bucket-tsql.md
The part of *date* that is used with the 'number' parameter. Ex. Year, month
*number*
-The integer number that decides the width of the bucket combined with *datepart* argument. This represents the width of the datepart buckets from the origin time. **`This argument cannot be a negative integer value`**.
+The *integer* number that decides the width of the bucket combined with the *datepart* argument. This represents the width of the datepart buckets from the origin time. **`This argument has to be a positive integer value`**.
*date*
azure-video-indexer Connect Classic Account To Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-classic-account-to-arm.md
Title: Connect a classic Azure Video Indexer account to ARM
description: This topic explains how to connect an existing classic paid Azure Video Indexer account to an ARM-based account Previously updated : 10/19/2021 Last updated : 05/04/2022
In this article, we will go through options on connecting your **existing** Azur
## Prerequisites
-* Unlimited paid Azure Video Indexer account (classic account).
+1. Unlimited paid Azure Video Indexer account (classic account).
- * To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure Video Indexer account.
-* Azure Subscription.
-* User assigned managed identity (can be created along the flow).
+ 1. To perform the connect to the ARM (Azure Resource Manager) action, you should have owner's permissions on the Azure Video Indexer classic account.
+1. Azure Subscription with Owner permissions or Contributor with Administrator Role assignment.
+
+ 1. Same level of permission for the Azure Media Service associated with the existing Azure Video Indexer Classic account.
+1. User assigned managed identity (can be created along the flow).
## Transition state
However, this can be resolved by assigning the right role-assignment to these us
Only the account owner, who performed the connect action, is automatically assigned as the owner on the connected account. When [Azure policies][docs-governance-policy] are enforced, they override the settings on the account. If users aren't added through Azure RBAC to the account within 30 days, they'll lose access through the API as well as the Azure Video Indexer portal.
-After the transition state ends, users will only be able to generate a valid access token through through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.
+After the transition state ends, users will only be able to generate a valid access token through ARM, making Azure RBAC the exclusive way to manage role-based access control on the account.
> [!NOTE] > If there are invited users you wish to remove access from, do it before connecting the account to ARM.
Before the end of the 30 days of transition state, you can remove access from us
1. Select the Azure Video Indexer classic account you wish to connect to ARM (classic accounts will be tagged with a *classic tag*). 1. Click **Settings**.
- ![account-settings](media/connect-classic-account-to-arm/user-account-settings.png)
+ :::image type="content" alt-text="Screenshot that shows the Azure Video Indexer portal settings." source="./media/connect-classic-account-to-arm/classic-account-settings.png":::
1. Click **Connect to an ARM-based account**.
- ![connect-button-portal](media/connect-classic-account-to-arm/connect-button.png)
+ :::image type="content" alt-text="Screenshot that shows the connect to an ARM-based account dialog." source="./media/connect-classic-account-to-arm/connect-classic-to-arm.png":::
1. Sign in to the Azure portal. 1. The Azure Video Indexer create blade will open. 1. In the **Create Azure Video Indexer account** section, enter the required values.
- * If you followed the steps the fields should be auto-populated, make sure to validate the eligible values.
+    If you followed the steps, the fields should be auto-populated; make sure to validate the eligible values.
- ![connect-to-arm](media/connect-classic-account-to-arm/connect-blade-new.png)
+ :::image type="content" alt-text="Screenshot that shows the create Azure Video Indexer account dialog." source="./media/connect-classic-account-to-arm/connect-blade.png":::
+
+ Here are the descriptions for the resource fields:
| Name | Description | | ||
azure-video-indexer Connect To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/connect-to-azure.md
Title: Create an Azure Video Indexer (formerly Azure Video Analyzer for Media) account connected to Azure
+ Title: Create an Azure Video Indexer account connected to Azure
description: Learn how to create an Azure Video Indexer (formerly Azure Video Analyzer for Media) account connected to Azure. Previously updated : 10/19/2021 Last updated : 05/03/2022 # Create an Azure Video Indexer account
-When creating an Azure Video Indexer (formerly Azure Video Analyzer for Media) account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Video Analyzer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts(General Availability), and ARM-based accounts(Public Preview). Main difference between the two is account management platform. While classic accounts are built on the API Management, ARM-based accounts management is built on Azure, enables to apply access control to all services with role-based access control (Azure RBAC) natively.
+When creating an Azure Video Indexer (formerly Azure Video Analyzer for Media) account, you can choose a free trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a free trial, Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). With the paid options, Azure Video Indexer offers two types of accounts: classic accounts (General Availability) and ARM-based accounts (Public Preview). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which natively enables you to apply access control to all services with role-based access control (Azure RBAC).
* You can create an Azure Video Indexer **classic** account through our [API](https://aka.ms/avam-dev-portal). * You can create an Azure Video Indexer **ARM-based** account through one of the following:
The article also covers [Linking an Azure Video Indexer account to Azure Governm
This user should be an Azure AD user with a work or school account. Don't use a personal account, such as outlook.com, live.com, or hotmail.com.
- ![all Azure AD users](./media/create-account/all-aad-users.png)
+ :::image type="content" alt-text="Screenshot that shows how to choose a user in your Azure A D domain." source="./media/create-account/all-aad-users.png":::
### Additional prerequisites for automatic flow
The article also covers [Linking an Azure Video Indexer account to Azure Governm
This user should be a member in your Azure subscription with either an **Owner** role, or both **Contributor** and **User Access Administrator** roles. A user can be added twice, with two roles. Once with Contributor and once with User Access Administrator. For more information, see [View the access a user has to Azure resources](../role-based-access-control/check-access.md).
- ![access control](./media/create-account/access-control-iam.png)
+ :::image type="content" alt-text="Screenshot that shows the access control settings." source="./media/create-account/access-control-iam.png":::
### Additional prerequisites for manual flow
The article also covers [Linking an Azure Video Indexer account to Azure Governm
Search for **Microsoft.Media** and **Microsoft.EventGrid**. If not in the "Registered" state, click **Register**. It takes a couple of minutes to register.
- ![EventGrid](./media/create-account/event-grid.png)
+ :::image type="content" alt-text="Screenshot that shows how to select an event grid subscription." source="./media/create-account/event-grid.png":::
## Connect to Azure manually (advanced option)
If the connection to Azure failed, you can attempt to troubleshoot the problem b
Make sure the Media Services account was created with the classic APIs.
- ![Media Services classic API](./media/create-account/enable-classic-api.png)
-
+ :::image type="content" alt-text="Screenshot that shows how to use the classic API." source="./media/create-account/enable-classic-api.png":::
When creating a storage account for your Media Services account, select **StorageV2** for account kind and **Geo-redundant** (GRS) for replication fields.
- ![New AMS account](./media/create-account/create-new-ams-account.png)
+ :::image type="content" alt-text="Screenshot that shows how to specify a storage account." source="./media/create-account/create-new-ams-account.png":::
> [!NOTE] > Make sure to write down the Media Services resource and account names. You'll need them for the steps in the next section.
If the connection to Azure failed, you can attempt to troubleshoot the problem b
In the new Media Services account, select **Streaming endpoints**. Then select the streaming endpoint and press start.
- ![Streaming endpoints](./media/create-account/create-ams-account-se.png)
+ :::image type="content" alt-text="Screenshot that shows how to specify streaming endpoints." source="./media/create-account/create-ams-account-se.png":::
4. For Azure Video Indexer to authenticate with Media Services API, an AD app needs to be created. The following steps guide you through the Azure AD authentication process described in [Get started with Azure AD authentication by using the Azure portal](/azure/media-services/previous/media-services-portal-get-started-with-aad): 1. In the new Media Services account, select **API access**.
To import your data, follow the steps:
* If the account ID isn't showing, you can copy and paste the account ID from Azure portal or the account list, on the side blade in the Azure Video Indexer Portal. 5. Click **Import content**
-![import](./media/create-account/import-steps.png)
-
+ :::image type="content" alt-text="Screenshot that shows how to import your data." source="./media/create-account/import-to-arm-account.png":::
All media and content model customizations will be copied from the *trial* account into the new ARM-Based account.
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
After you've enabled VMs to be managed from Azure, you can install guest managem
- To enable guest management, customers will be required to use admin credentials - VMtools should already be running on the VM > [!NOTE]
-> Azure VMware Solution vCenter will be available in global search but will NOT be available in the list of vCenters for ARc for VMware.
+> Azure VMware Solution vCenter will be available in global search but will NOT be available in the list of vCenters for Arc for VMware.
- Customers can view the list of VM extensions available in public preview. - Change tracking
The following command invokes the set credential for the specified appliance res
Use the following steps to perform a manual upgrade for Arc appliance virtual machine (VM). 1. Log into vCenter.
-1. Locate the arc appliance VM, which should be in the resource pool that was configured during onboarding.
+1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding.
1. Power off the VM. 1. Delete the VM. 1. Delete the download template corresponding to the VM.
backup Backup Azure Database Postgresql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-support-matrix.md
East US, East US 2, Central US, South Central US, West US, West US 2, West Centr
- Recommended limit for the maximum database size is 400 GB. - Cross-region backup isn't supported. Therefore, you can't back up an Azure PostgreSQL server to a vault in another region. Similarly, you can only restore a backup to a server within the same region as the vault. However, we support cross-subscription backup and restore.
+- Backup of Azure PostgreSQL servers with Private endpoint enabled is currently not supported.
- Only the data is recovered during restore; _roles_ aren't restored. ## Next steps -- [Back up Azure Database for PostgreSQL server](backup-azure-database-postgresql.md)
+- [Back up Azure Database for PostgreSQL server](backup-azure-database-postgresql.md)
backup Backup Azure Database Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql.md
# Azure Database for PostgreSQL backup with long-term retention
-This article describes how to back up Azure Database for PostgreSQL server.
+This article describes how to back up Azure Database for PostgreSQL server. Before you begin, review the [supported configurations, feature considerations, and known limitations](https://docs.microsoft.com/azure/backup/backup-azure-database-postgresql-support-matrix).
## Configure backup on Azure PostgreSQL databases
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Multiple Backups Per Day | Supported (in preview), using *Enhanced policy* (
## Operating system support (Windows)
-The following table summarizes the supported operating systems when backing up Windows Azure VMs.
+The following table summarizes the supported operating systems when backing up Azure VMs running Windows.
**Scenario** | **OS support** |
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported [Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore Single Azure VM. Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Backup is supported in all regions where Trusted Launch VM is available. </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li></ul>
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported. <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Backup is supported in all regions where Trusted Launch VM is available. </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). </li></ul>
+ ## VM storage support
Network traffic to Azure:
Data security: - When backing up Azure VMs, you need to set up encryption *within* the virtual machine.-- Azure Backup supports Azure Disk Encryption, which uses BitLocker on Windows virtual machines and us **dm-crypt** on Linux virtual machines.
+- Azure Backup supports Azure Disk Encryption, which uses BitLocker on virtual machines running Windows and uses **dm-crypt** on Linux virtual machines.
- On the back end, Azure Backup uses [Azure Storage Service encryption](../storage/common/storage-service-encryption.md), which protects data at rest. **Machine** | **In transit** | **At rest**
On-premises/Azure VMs with MABS | ![Yes][green] | ![Yes][green]
[green]: ./media/backup-support-matrix/green.png [yellow]: ./media/backup-support-matrix/yellow.png
-[red]: ./media/backup-support-matrix/red.png
+[red]: ./media/backup-support-matrix/red.png
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/Overview/language-support.md
Consider the following:
## Supporting multiple languages in one QnA Maker resource
-This functionality is not supported in our current Generally Available (GA) stable release. Check out QnA Maker managed to test out this functionality.
+This functionality is not supported in our current Generally Available (GA) stable release. Check out [question answering](https://docs.microsoft.com/azure/cognitive-services/language-service/question-answering/overview) to test out this functionality.
## Supporting multiple languages in one knowledge base
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md
Previously updated : 11/09/2021 Last updated : 05/10/2022
Available to organizations with a business presence in China. See more informati
- Speech-to-text
  - Custom speech (Acoustic Model (AM) and Language Model (LM) adaptation)
  - [Speech Studio](https://speech.azure.cn/)
+ - [Pronunciation assessment](how-to-pronunciation-assessment.md)
- Text-to-speech
  - Standard voice
  - Neural voice
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)

* Release notes for version `1.7.0`:
- * Update dependencies
+ * Update language detection engine, and fix throttling-rate support for continuous accuracy mode
| Image Tags | Notes |
|--|:--|
| `latest` | |
-| `1.7.0-amd64-preview` | |
+| `1.8.0-amd64-preview` | |
# [Previous versions](#tab/previous)

| Image Tags | Notes |
|--|:--|
+| `1.7.0-amd64-preview` | |
| `1.6.1-amd64-preview` | |
| `1.5.0-amd64-preview` | |
| `1.3.0-amd64-preview` | |
communication-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/reference.md
Previously updated : 06/30/2021 Last updated : 05/09/2022
+<!-- Please keep this table synced with: https://github.com/Azure/Communication/blob/master/README.md -->
+ # Reference documentation overview
-The following table details the available Communication Services packages along with corresponding reference documentation:
+## External links and docs
+For each area, we have external pages to track and review our SDKs. You can consult the table below to find the matching page for your SDK of interest.
+
+| Area | JavaScript | .NET | Python | Java SE | iOS | Android | Other |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | [PyPi](https://pypi.org/project/azure-mgmt-communication/) | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) |
+| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) ([docs](/objectivec/communication-services/calling/)) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
+| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
+| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
+| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - |
+| Network Traversal | [npm](https://www.npmjs.com/package/@azure/communication-network-traversal) | [NuGet](https://www.nuget.org/packages/Azure.Communication.NetworkTraversal) | [PyPi](https://pypi.org/project/azure-communication-networktraversal/) | [Maven](https://search.maven.org/search?q=a:azure-communication-networktraversal) | - | - | - |
+| Phone numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.phonenumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
+| Signaling | [npm](https://www.npmjs.com/package/@azure/communication-signaling) | - | - | - | - | - | - |
+| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
-<!--note that this table also exists here and should be synced: https://github.com/Azure/Communication/blob/master/README.md -->
+## Class/method documentation
-| Area | JavaScript | .NET | Python | Java SE | iOS | Android | Other |
-| -- | - | - | | - | -- | -- | |
-| Azure Resource Manager | - | [NuGet](https://www.nuget.org/packages/Azure.ResourceManager.Communication) | [PyPi](https://pypi.org/project/azure-mgmt-communication/) | - | - | - | [Go via GitHub](https://github.com/Azure/azure-sdk-for-go/releases/tag/v46.3.0) |
-| Common | [npm](https://www.npmjs.com/package/@azure/communication-common) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Common/) | N/A | [Maven](https://search.maven.org/search?q=a:azure-communication-common) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-common) | - |
-| Identity | [npm](https://www.npmjs.com/package/@azure/communication-identity) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Identity) | [PyPi](https://pypi.org/project/azure-communication-identity/) | [Maven](https://search.maven.org/search?q=a:azure-communication-identity) | - | - | - |
-| Phone numbers | [npm](https://www.npmjs.com/package/@azure/communication-phone-numbers) | [NuGet](https://www.nuget.org/packages/Azure.Communication.phonenumbers) | [PyPi](https://pypi.org/project/azure-communication-phonenumbers/) | [Maven](https://search.maven.org/search?q=a:azure-communication-phonenumbers) | - | - | - |
-| Chat | [npm](https://www.npmjs.com/package/@azure/communication-chat) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Chat) | [PyPi](https://pypi.org/project/azure-communication-chat/) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | [GitHub](https://github.com/Azure/azure-sdk-for-ios/releases) | [Maven](https://search.maven.org/search?q=a:azure-communication-chat) | - |
-| SMS | [npm](https://www.npmjs.com/package/@azure/communication-sms) | [NuGet](https://www.nuget.org/packages/Azure.Communication.Sms) | [PyPi](https://pypi.org/project/azure-communication-sms/) | [Maven](https://search.maven.org/artifact/com.azure/azure-communication-sms) | - | - | - |
-| Calling | [npm](https://www.npmjs.com/package/@azure/communication-calling) | - | - | - | [GitHub](https://github.com/Azure/Communication/releases) ([docs](/objectivec/communication-services/calling/)) | [Maven](https://search.maven.org/artifact/com.azure.android/azure-communication-calling/) | - |
-| Reference Documentation | [docs](https://azure.github.io/azure-sdk-for-js/communication.html) | [docs](https://azure.github.io/azure-sdk-for-net/communication.html) | - | [docs](http://azure.github.io/azure-sdk-for-java/communication.html) | - | - | - |
+Additionally, the class/reference information for each SDK can be found below:
+- [JavaScript](https://azure.github.io/azure-sdk-for-js/communication.html)
+- [.NET](https://azure.github.io/azure-sdk-for-net/communication.html)
+- [Java](https://azure.github.io/azure-sdk-for-java/communication.html)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/concepts.md
A Job is a unit of work (demand), which must be routed to an available Worker (s
A real-world example is an incoming call or chat in the context of a call center.
-### Job submission flow
+### Job lifecycle
1. Your application submits a Job via the Job Router SDK.
-2. The Job is classified and a [JobClassified Event][job_classified_event] is sent via Event Grid.
+1. (Optional) If you specified a [Classification Policy](#classification-policy), the Job is classified and a [JobClassified Event][job_classified_event] is sent via Event Grid.
+1. The Job is added to the queue that you specified or that was determined by the Classification Policy, and a [JobQueued Event][job_queued_event] is sent via Event Grid.
+1. Job Router searches for matching workers based upon any [Label selectors](#label-selectors) and the [Distribution Policy](#distribution-policy) of the queue.
+1. When a matching Worker is found, an [Offer](#offer) is issued and an [OfferIssued Event][offer_issued_event] is sent.
+1. Your application can accept the [Offer](#offer) via the SDK. The Job is then removed from the queue, and an [OfferAccepted Event][offer_accepted_event] that contains an `assignmentId` is sent.
+1. Once the Worker has completed the Job, the SDK can be used to complete and close it, using the `assignmentId`. This will free the Worker up to take on the next Job.
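To make the flow concrete, the following sketch strings the SDK-facing steps together. The package, client, and method names here are hypothetical and for illustration only; consult the Job Router SDK reference for the actual API surface.

```typescript
// Hypothetical client and method names, loosely modeled on the Job Router SDK.
// Consult the Job Router SDK reference for the real package name and API surface.
import { JobRouterClient } from "@azure/communication-job-router"; // assumed package name

async function runJobLifecycle(): Promise<void> {
  const client = new JobRouterClient("<connection-string>");

  // Step 1: submit the job. If a classification policy is attached (step 2),
  // it determines the queue, priority, and worker selectors.
  await client.createJob("job-1", {
    channelId: "voice",
    classificationPolicyId: "office-hours-policy",
  });

  // Steps 4-5: when an OfferIssued event arrives via Event Grid,
  // accept the offer to receive an assignment ID.
  const { assignmentId } = await client.acceptJobOffer("worker-1", "<offer-id>");

  // Step 6: once the work is done, complete and then close the job
  // so the worker is freed up for the next job.
  await client.completeJob("job-1", assignmentId);
  await client.closeJob("job-1", assignmentId);
}
```

The Event Grid events in steps 2 through 5 arrive out of band, so a real application would typically accept offers from an event handler rather than inline as shown here.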
- :::image type="content" source="../media/router/acs-router-job-submission.png" alt-text="Diagram of job submission.":::
## Worker
-A Worker is the supply available to handle a Job. Each worker registers with one or more queues to receive jobs.
+A Worker is the supply available to handle a Job. When you use the SDK to register a Worker to receive jobs, you can specify:
-A real-world example is an agent in a call center.
-
-### Worker registration flow
+- One or more queues to listen on.
+- The number of concurrent jobs per [Channel](#channel) that the Worker can handle.
+- A set of [Labels](#labels) that can be used to group and [select](#label-selectors) workers.
-1. When your Worker is ready to take on work, you can register the worker via the Job Router SDK.
-2. Job Router then sends a [WorkerRegistered Event][worker_registered_event]
-
- :::image type="content" source="../media/router/acs-router-worker-registration.png" alt-text="Diagram of worker registration.":::
+A real-world example is an agent in a call center.
## Queue
-A Queue is an ordered list of jobs, that are waiting to be served by a worker. Workers register with a queue to receive work from it.
+A Queue is an ordered list of jobs that are waiting to be served to a worker. Workers register with a queue to receive work from it.
A real-world example is a call queue in a call center.
An exception policy controls the behavior of a Job based on a trigger and execut
[worker_registered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerregistered [worker_deregistered_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerderegistered [job_classified_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobclassified
+[job_queued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterjobqueued
[offer_issued_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferissued [offer_accepted_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferaccepted [offer_declined_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferdeclined [offer_expired_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferexpired [offer_revoked_event]: ../../how-tos/router-sdk/subscribe-events.md#microsoftcommunicationrouterworkerofferrevoked
-[worker-scoring]: ../../how-tos/router-sdk/customize-worker-scoring.md
+[worker-scoring]: ../../how-tos/router-sdk/customize-worker-scoring.md
connectors Apis List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/apis-list.md
To create or manage logic app resources and connections, you need certain permis
Before you can use a connector's triggers or actions in your workflow, most connectors require that you first create a *connection* to the target service or system. To create a connection from within a logic app workflow, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For a small number of built-in operations and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials.
-<a name="connection-security-encyrption"></a>
+<a name="connection-security-encryption"></a>
### Connection security and encryption
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
ms.suite: integration
Previously updated : 02/02/2022 Last updated : 05/06/2022 tags: connectors
The following list describes key SFTP-SSH capabilities that differ from the SFTP
* Caches the connection to SFTP server *for up to 1 hour*. This capability improves performance and reduces how often the connector tries connecting to the server. To set the duration for this caching behavior, edit the [**ClientAliveInterval** property](https://man.openbsd.org/sshd_config#ClientAliveInterval) in the SSH configuration on your SFTP server.
+## How SFTP-SSH triggers work
+
+<a name="polling-behavior"></a>
+
+### Polling behavior
+
+SFTP-SSH triggers poll the SFTP file system and look for any file that changed since the last poll. Some tools let you preserve the timestamp when the files change. In these cases, you have to disable this feature so your trigger can work. Here are some common settings:
+
+| SFTP client | Action |
+|-|--|
+| Winscp | Go to **Options** > **Preferences** > **Transfer** > **Edit** > **Preserve timestamp** > **Disable** |
+| FileZilla | Go to **Transfer** > **Preserve timestamps of transferred files** > **Disable** |
+|||
+
+When a trigger finds a new file, the trigger checks that the new file is complete, and not partially written. For example, a file might have changes in progress when the trigger checks the file server. To avoid returning a partially written file, the trigger notes the timestamp for the file that has recent changes, but doesn't immediately return that file. The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.
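+For example, if the trigger polls every 10 minutes and finds a file that's still being written, the file isn't returned until the following poll, so it can surface as late as 20 minutes after it first changed.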
+
+<a name="trigger-recurrence-shift-drift"></a>
+
+## Trigger recurrence shift and drift (daylight saving time)
+
+Recurring connection-based triggers where you need to create a connection first, such as the managed SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
+
+To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
+ ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Your SFTP server address and account credentials, so your workflow can access your SFTP account. You also need access to an SSH private key and the SSH private key password. To upload large files using chunking, you need both read and write access for the root folder on your SFTP server. Otherwise, you get a "401 Unauthorized" error.
- The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* these private key formats, encryption algorithms, fingerprints, and key exchange algorithms:
+ The SFTP-SSH connector supports both private key authentication and password authentication. However, the SFTP-SSH connector supports *only* the following private key formats, key exchange algorithms, encryption algorithms, and fingerprints:
* **Private key formats**: RSA (Rivest Shamir Adleman) and DSA (Digital Signature Algorithm) keys in both OpenSSH and ssh.com formats. If your private key is in PuTTY (.ppk) file format, first [convert the key to the OpenSSH (.pem) file format](#convert-to-openssh).
+ * **Key exchange algorithms**: Review [Key Exchange Method - SSH.NET](https://github.com/sshnet/SSH.NET#key-exchange-method).
* **Encryption algorithms**: Review [Encryption Method - SSH.NET](https://github.com/sshnet/SSH.NET#encryption-method). * **Fingerprint**: MD5
- * **Key exchange algorithms**: Review [Key Exchange Method - SSH.NET](https://github.com/sshnet/SSH.NET#key-exchange-method).
After you add an SFTP-SSH trigger or action to your workflow, you have to provide connection information for your SFTP server. When you provide your SSH private key for this connection, ***don't manually enter or edit the key***, which might cause the connection to fail. Instead, make sure that you ***copy the key*** from your SSH private key file, and ***paste*** that key into the connection details. For more information, see the [Connect to SFTP with SSH](#connect) section later this article.
The following list describes key SFTP-SSH capabilities that differ from the SFTP
* The logic app workflow where you want to access your SFTP account. To start with an SFTP-SSH trigger, [create a blank logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md). To use an SFTP-SSH action, start your workflow with another trigger, for example, the **Recurrence** trigger.
-## How SFTP-SSH triggers work
+## Considerations
-<a name="polling-behavior"></a>
+The following section describes considerations to review when you use this connector's triggers and actions.
-### Polling behavior
+<a name="different-folders-trigger-processing-file-storage"></a>
-SFTP-SSH triggers poll the SFTP file system and look for any file that changed since the last poll. Some tools let you preserve the timestamp when the files change. In these cases, you have to disable this feature so your trigger can work. Here are some common settings:
+### Use different SFTP folders for file upload and processing
-| SFTP client | Action |
-|-|--|
-| Winscp | Go to **Options** > **Preferences** > **Transfer** > **Edit** > **Preserve timestamp** > **Disable** |
-| FileZilla | Go to **Transfer** > **Preserve timestamps of transferred files** > **Disable** |
-|||
+On your SFTP server, use separate folders for storing uploaded files and for the trigger to monitor those files for processing. Otherwise, the trigger won't fire and will behave unpredictably, for example, skipping a random number of files that it would otherwise process. However, this requirement means that you need a way to move files between those folders.
-When a trigger finds a new file, the trigger checks that the new file is complete, and not partially written. For example, a file might have changes in progress when the trigger checks the file server. To avoid returning a partially written file, the trigger notes the timestamp for the file that has recent changes, but doesn't immediately return that file. The trigger returns the file only when polling the server again. Sometimes, this behavior might cause a delay that is up to twice the trigger's polling interval.
+If this trigger problem happens, remove the files from the folder that the trigger monitors, and use a different folder to store the uploaded files.
-<a name="trigger-recurrence-shift-drift"></a>
+<a name="create-file"></a>
-## Trigger recurrence shift and drift (daylight saving time)
+### Create file
-Recurring connection-based triggers where you need to create a connection first, such as the managed SFTP-SSH trigger, differ from built-in triggers that run natively in Azure Logic Apps, such as the [Recurrence trigger](../connectors/connectors-native-recurrence.md). In recurring connection-based triggers, the recurrence schedule isn't the only driver that controls execution, and the time zone only determines the initial start time. Subsequent runs depend on the recurrence schedule, the last trigger execution, *and* other factors that might cause run times to drift or produce unexpected behavior. For example, unexpected behavior can include failure to maintain the specified schedule when daylight saving time (DST) starts and ends.
+To create a file on your SFTP server, you can use the SFTP-SSH **Create file** action. When this action creates the file, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if you move the newly created file before the Logic Apps service can make the call to get the metadata, you get a `404` error message, `'A reference was made to a file or folder which does not exist'`. To skip reading the file's metadata after file creation, follow the steps to [add and set the **Get all file metadata** property to **No**](#file-does-not-exist).
-To make sure that the recurrence time doesn't shift when DST takes effect, manually adjust the recurrence. That way, your workflow continues to run at the expected time or specified start time. Otherwise, the start time shifts one hour forward when DST starts and one hour backward when DST ends. For more information, see [Recurrence for connection-based triggers](../connectors/apis-list.md#recurrence-for-connection-based-triggers).
+> [!IMPORTANT]
+> If you use chunking with SFTP-SSH operations that create files on your SFTP server,
+> these operations create temporary `.partial` and `.lock` files. These files help
+> the operations use chunking. Don't remove or change these files. Otherwise,
+> the file operations fail. When the operations finish, they delete the temporary files.
<a name="convert-to-openssh"></a>
The PuTTY format and OpenSSH format use different file name extensions. The PuTT
### Windows OS
-1. If you haven't done so already, [download the latest PuTTY Generator (puttygen.exe) tool](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html), and then launch the tool.
+1. If you haven't done so already, [download the latest PuTTY Generator (puttygen.exe) tool](https://www.puttygen.com), and then open the tool.
-1. On this screen, select **Load**.
+1. In the PuTTY Key Generator tool (puttygen.exe), under **Actions**, select **Load**.
- ![Select "Load"](./media/connectors-sftp-ssh/puttygen-load.png)
+ ![Screenshot showing the PuTTY Key Generator tool and the "Actions" section with "Load" selected.](./media/connectors-sftp-ssh/puttygen-load.png)
1. Browse to your private key file in PuTTY format, and select **Open**. 1. From the **Conversions** menu, select **Export OpenSSH key**.
- ![Select "Export OpenSSH key"](./media/connectors-sftp-ssh/export-openssh-key.png)
+ ![Screenshot showing the PuTTY Generator tool with the "Conversions" menu open and "Export OpenSSH key" selected.](./media/connectors-sftp-ssh/export-openssh-key.png)
-1. Save the private key file with the `.pem` file name extension.
+1. Save the private key file with the **.pem** file name extension.
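If the puttygen command-line tool is available on your system (for example, through the putty-tools package on Linux), a single command performs the same conversion. This is a sketch that assumes your PuTTY-format key is named `mykey.ppk`:

```bash
# Convert a PuTTY-format private key to the OpenSSH (.pem) format
# (assumes the puttygen command-line tool is installed)
puttygen mykey.ppk -O private-openssh -o mykey.pem
```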
-## Considerations
+## Find the MD5 fingerprint
-This section describes considerations to review when you use this connector's triggers and actions.
+The SFTP-SSH connector rejects a connection if the SFTP server's fingerprint doesn't match the expected fingerprint. To get the MD5 fingerprint, which is a sequence of 16 pairs of hex digits delimited by colons, try the following options.
-<a name="different-folders-trigger-processing-file-storage"></a>
+### You have the key
-### Use different SFTP folders for file upload and processing
+The MD5 fingerprint is a 47-character string delimited by colons. To get the MD5 fingerprint when you have the key, you can use tools such as `ssh-keygen`, for example:
-On your SFTP server, use separate folders for storing uploaded files and for the trigger to monitor those files for processing. Otherwise, the trigger won't fire and behaves unpredictably, for example, skipping a random number of files that the trigger processes. However, this requirement means that you need a way to move files between those folders.
+```bash
+ssh-keygen -l -f id_rsa.pub -E md5
+```
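The output includes the key length, the fingerprint, the key's comment, and the key type; the colon-delimited string after `MD5:` is the value that the connection expects. The values below are illustrative only:

```
2048 MD5:16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48 user@example.com (RSA)
```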
-If this trigger problem happens, remove the files from the folder that the trigger monitors, and use a different folder to store the uploaded files.
+### You don't have the key
-<a name="create-file"></a>
+To get an MD5 fingerprint when you don't have a key, you can use the latest [Server and Protocol Information Dialog tool by WinSCP](https://winscp.net/eng/docs/ui_fsinfo), or you can use the PuTTY Configuration tool instead:
-### Create file
+1. In the PuTTY Configuration tool (putty.exe), in the **Category** window, open **Connection** > **SSH** > **Host keys**.
-To create a file on your SFTP server, you can use the SFTP-SSH **Create file** action. When this action creates the file, the Logic Apps service also automatically calls your SFTP server to get the file's metadata. However, if you move the newly created file before the Logic Apps service can make the call to get the metadata, you get a `404` error message, `'A reference was made to a file or folder which does not exist'`. To skip reading the file's metadata after file creation, follow the steps to [add and set the **Get all file metadata** property to **No**](#file-does-not-exist).
+1. Under **Host key algorithm preference**, in the **Algorithm selection policy** list, check that **RSA** appears at the top.
-> [!IMPORTANT]
-> If you use chunking with SFTP-SSH operations that create files on your SFTP server,
-> these operations create temporary `.partial` and `.lock` files. These files help
-> the operations use chunking. Don't remove or change these files. Otherwise,
-> the file operations fail. When the operations finish, they delete the temporary files.
+1. If **RSA** doesn't appear at the top, select **RSA**, and then select **Up** until **RSA** moves to the top.
+
+ ![Screenshot showing the PuTTY Configuration tool, "Connection" category expanded to show "Host keys" selected. On right pane, "RSA" and "Up" button appear selected.](media/connectors-sftp-ssh/putty-select-rsa-key.png)
+
+1. Connect to your SFTP server with PuTTY. After the connection is created, when the PuTTY security alert appears, select **More info**.
+
+ ![Screenshot showing the PuTTY terminal and security alert with "More info" selected.](media/connectors-sftp-ssh/putty-security-alert-more-info.png)
+
+ > [!TIP]
+ >
+ > If the security alert doesn't appear, try clearing the **SshHostKeys** entry. Open the Windows registry editor,
+ > and browse to the following entry:
+ >
+ > **Computer\HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\SshHostKeys**
+
+1. After the **PuTTY: information about the server's host key** box appears, find the **MD5 fingerprint** property, and copy the *47-character string value*, for example:
+
+ ![Screenshot showing the more information box with the "MD5 fingerprint" property and the string with the last 47 characters selected for copying.](medi5-fingerprint-key.png)
<a name="connect"></a>
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 05/05/2022 Last updated : 05/10/2022 # Dapr integration with Azure Container Apps
scopes:
- subscriber-app ```
-## Current supported Dapr version
-
-Azure Container Apps supports Dapr version 1.4.2.
-
-Version upgrades are handled transparently by Azure Container Apps. You can find the current version via the Azure portal and the CLI. See [known limitations](#limitations) around versioning.
- ## Limitations ### Unsupported Dapr capabilities
container-apps Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/health-probes.md
Health probes in Azure Container Apps are based on [Kubernetes health probes](ht
Container Apps support the following probes:

-- [Liveness](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command): Reports the overall health of your replica.
-- [Startup](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes): Delay reporting on a liveness or readiness state for slower apps with a startup probe.
-- [Readiness](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes): Signals that a replica is ready to accept traffic.
+- **Liveness**: Reports the overall health of your replica.
+- **Readiness**: Signals that a replica is ready to accept traffic.
+- **Startup**: Delay reporting on a liveness or readiness state for slower apps with a startup probe.
+ For a full listing of the specification supported in Azure Container Apps, refer to [Azure REST API specs](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/app/resource-manager/Microsoft.App/stable/2022-03-01/CommonDefinitions.json#L119-L236).
containers:
-The optional [failureThreshold](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) setting defines the number of attempts Kubernetes tries if the probe if execution fails. Attempts that exceed the `failureThreshold` amount cause different results for each probe. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) for details.
+The optional `failureThreshold` setting defines the number of times Container Apps retries a probe if its execution fails. Attempts that exceed the `failureThreshold` amount cause different results for each probe.
+
+## Default configuration
+
+Container Apps offers default probe settings if no probes are defined. If your app takes an extended amount of time to start, which is very common in Java, you often need to customize the probes so your container won't crash.
+
+The following example demonstrates how to configure the liveness and readiness probes in order to extend the startup times.
+
+```json
+"probes": [
+ {
+ "type": "liveness",
+ "failureThreshold": 3,
+ "periodSeconds": 10,
+ "successThreshold": 1,
+ "tcpSocket": {
+ "port": 80
+ },
+ "timeoutSeconds": 1
+ },
+ {
+ "type": "readiness",
+ "failureThreshold": 48,
+ "initialDelaySeconds": 3,
+ "periodSeconds": 5,
+ "successThreshold": 1,
+ "tcpSocket": {
+ "port": 80
+ },
+ "timeoutSeconds": 5
+  }
+]
+```
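With these illustrative values, the readiness probe tolerates up to 48 consecutive failed checks at 5-second intervals, giving the container roughly four minutes to start accepting traffic, while the liveness probe allows only 3 failures at 10-second intervals (about 30 seconds) before the replica is considered unhealthy.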
## Next steps
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
There are two scale properties that apply to all rules in your container app:
| Scale property | Description | Default value | Min value | Max value |
|---|---|---|---|---|
-| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 1 | 10 |
+| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 0 | 10 |
| `maxReplicas` | Maximum number of replicas running for your container app. | n/a | 1 | 10 |

- If your container app scales to zero, then you aren't billed.
There are two scale properties that apply to all rules in your container app:
- Changes to scaling rules are a [revision-scope](./revisions.md#revision-scope-changes) change.
- When using non-HTTP event scale rules, setting the `activeRevisionMode` to `single` is recommended.
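For reference, here's a minimal sketch of how these properties fit in a container app's resource template, together with an HTTP scale rule; the rule name and metadata values are illustrative only:

```json
"scale": {
  "minReplicas": 0,
  "maxReplicas": 10,
  "rules": [
    {
      "name": "http-rule",
      "http": {
        "metadata": {
          "concurrentRequests": "100"
        }
      }
    }
  ]
}
```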
-> [!IMPORTANT]
-> Replica quantities are a target amount, not a guarantee. Even if you set `maxReplicas` to `1`, there is no assurance of thread safety.
+ ## Scale triggers
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
az container create \
> Adjust the value of `--subnet address-prefix` for the IP address space you need in your subnet. The smallest supported subnet is /29, which provides eight IP addresses. Some IP addresses are reserved for use by Azure, which you can read more about [here](../virtual-network/ip-services/private-ip-addresses.md).

## Create a public IP address
-In the following sections, use the Azure CLI to deploy an Azure NAT gateway in the virtual network. For background, see [Tutorial: Create a NAT gateway using Azure CLI](../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md).
+In the following sections, use the Azure CLI to deploy an Azure NAT gateway in the virtual network. For background, see [Quickstart: Create a NAT gateway using Azure CLI](../virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md).
First, use the [az network public-ip create][az-network-public-ip-create] command to create a public IP address for the NAT gateway. This address will be used to access the internet. You'll receive a warning about an upcoming breaking change where Standard SKU IP addresses will be availability zone aware by default. You can learn more about the use of availability zones and public IP addresses [here](../virtual-network/ip-services/virtual-network-network-interface-addresses.md).
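For example, a sketch of the command, assuming a resource group named `myResourceGroup` (the resource names are illustrative):

```bash
# Create a standard, static public IP address for the NAT gateway to use
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP \
  --sku standard \
  --allocation-method static
```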
container-registry Container Registry Firewall Access Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-firewall-access-rules.md
If your registry is [geo-replicated](container-registry-geo-replication.md), a c
> [!NOTE] > Azure Container Registry is introducing [dedicated data endpoints](#enable-dedicated-data-endpoints), allowing you to tightly scope client firewall rules for your registry storage. Optionally enable data endpoints in all regions where the registry is located or replicated, using the form `<registry-name>.<region>.data.azurecr.io`.
+ ## About registry FQDNs
+
+A registry has two FQDNs: the **login URL** and the **data endpoint**.
+
+* Both the **login URL** and the **data endpoint** are accessible from within the virtual network by using private IPs when you enable a private link.
+* A registry that doesn't use dedicated data endpoints serves data from an endpoint of the form `*.blob.core.windows.net`, which doesn't provide the isolation required when configuring firewall rules.
+* A registry with a private link enabled gets the dedicated data endpoint automatically.
+* A dedicated data endpoint is created per region for a registry.
+* The login URL remains the same regardless of whether the data endpoint is enabled or disabled.
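+For example, a hypothetical registry named `myregistry` with a replica in `eastus` would use the login URL `myregistry.azurecr.io` and the dedicated data endpoint `myregistry.eastus.data.azurecr.io`.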
## Allow access by IP address range If your organization has policies to allow access only to specific IP addresses or address ranges, download [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519).
data-factory Solution Template Extract Data From Pdf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md
This template defines 4 parameters:
## How to use this solution template
-1. Go to template **Extract data from PDF**. Create a **New** connection to your source storage store or choose an existing connection. The source storage store is where you want to copy files from.
+1. Go to template **Extract data from PDF**. Create a **New** connection to your Form Recognizer resource or choose an existing connection.
- :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to the source in template set up.":::
-
-2. Create a **New** connection to your destination storage store or choose an existing connection.
-
- :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-2.png" alt-text="Screenshot of how to create a new connection or select existing connection from a drop down menu to Form Recognizer in template set up.":::
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to Form Recognizer in template set up.":::
In your connection to Form Recognizer, make sure to add a **Linked service Parameter**. You will need to use this parameter as your dynamic **Base URL**.

:::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-9.png" alt-text="Screenshot of where to add your Form Recognizer linked service parameter.":::

:::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-8.png" alt-text="Screenshot of the linked service base URL that references the linked service parameter.":::
+2. Create a **New** connection to your destination storage store or choose an existing connection.
+
+ :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-2.png" alt-text="Screenshot of how to create a new connection or select existing connection from a drop down menu to your sink in template set up.":::
3. Select **Use this template**.
data-factory Tutorial Managed Virtual Network Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-migrate.md
Last updated 05/08/2022
Managed virtual network provides a secure and manageable data integration solution. With managed virtual network, you can create the Azure integration runtime as part of a managed virtual network and use private endpoints to securely connect to supported data stores. Data traffic goes through Azure private links that provide secured connectivity to the data source. In addition, it prevents data exfiltration to the public internet. This tutorial provides steps to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network.
-## Steps to move existing Azure integration runtime to an Azure integration runtime in a managed virtual network
+## Azure Data Factory
+For Azure Data Factory, you can move an existing Azure integration runtime directly by following these steps:
1. Enable managed virtual network on your Azure integration runtime. You can enable it either on a new Azure integration time or an existing one. :::image type="content" source="./media/tutorial-managed-virtual-network/enable-managed-virtual network.png" alt-text="Screenshot of enabling managed virtual network during the creation or edit Azure integration runtime.":::
This tutorial provides steps to move existing Azure integration runtime to an Az
:::image type="content" source="./media/tutorial-managed-virtual-network/modify-linked-service.png" alt-text="Screenshot of modifying the integration runtime reference in the linked service.":::
+## Azure Synapse Analytics
+For Azure Synapse Analytics, an Azure integration runtime can't be moved directly in an existing workspace. You need to create a new workspace with a managed workspace virtual network. In the new workspace, the Azure integration runtime is in a managed virtual network, and you can reference it in the linked service.
## Next steps
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
To assess your machines for vulnerabilities, you can use one of the following so
Defender for Cloud also offers vulnerability assessment for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Learn more about using these scanners:
- [Find vulnerabilities with Microsoft threat and vulnerability management](deploy-vulnerability-assessment-tvm.md) - [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md)-- [Scan your registry images for vulnerabilities](defender-for-container-registries-usage.md)
+- [Scan your registry images for vulnerabilities](defender-for-containers-usage.md)
- [Scan your SQL resources for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md) Findings for each resource type are reported in separate recommendations:
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Review the findings from these vulnerability scanners and respond to them all fr
Learn more on the following pages: - [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md)-- [Identify vulnerabilities in images in Azure container registries](defender-for-container-registries-usage.md#identify-vulnerabilities-in-images-in-other-container-registries)
+- [Identify vulnerabilities in images in Azure container registries](defender-for-containers-usage.md#identify-vulnerabilities-in-images-in-other-container-registries)
## Optimize and improve security by configuring recommended controls
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
If you connect unsupported registries to your Azure subscription, Defender for C
### Can I customize the findings from the vulnerability scanner? Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
-[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-container-registries-usage.md#disable-specific-findings).
+[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-usage.md#disable-specific-findings).
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?

Defender for Cloud provides vulnerability assessments for every image pushed or pulled in a registry. Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image does still exist in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
Defender for Cloud provides vulnerability assessments for every image pushed or
## Next steps > [!div class="nextstepaction"]
-> [Scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+> [Scan your images for vulnerabilities](defender-for-containers-usage.md)
defender-for-cloud Defender For Containers Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-cicd.md
+
+ Title: Defender for Cloud's vulnerability scanner for container images in CI/CD workflows
+description: Learn how to scan container images in CI/CD workflows with Microsoft Defender for container registries
Last updated : 11/09/2021+++++
+# Identify vulnerable container images in your CI/CD workflows
+
+This page explains how to scan your Azure Container Registry-based container images with the integrated vulnerability scanner when they're built as part of your GitHub workflows.
+
+To set up the scanner, you'll need to enable **Microsoft Defender for container registries** and the CI/CD integration. When your CI/CD workflows push images to your registries, you can view registry scan results and a summary of CI/CD scan results.
+
+The findings of the CI/CD scans enrich the existing registry scan findings by Qualys. Defender for Cloud's CI/CD scanning is powered by [Aqua Trivy](https://github.com/aquasecurity/trivy).
+
+You'll get traceability information such as the GitHub workflow and the GitHub run URL, to help identify the workflows that are resulting in vulnerable images.
+
+> [!TIP]
+> The vulnerabilities identified in a scan of your registry might differ from the findings of your CI/CD scans. One reason for these differences is that the registry scanning is [continuous](defender-for-container-registries-introduction.md#when-are-images-scanned), whereas the CI/CD scanning happens immediately before the workflow pushes the image into the registry.
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:| **This CI/CD integration is in preview.**<br>We recommend that you experiment with it on non-production workflows only.<br>[!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]|
+|Pricing:|**Microsoft Defender for container registries** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
++
+## Prerequisites
+
+To scan your images as they're pushed by CI/CD workflows into your registries, you must have **Microsoft Defender for container registries** enabled on the subscription.
+
+## Set up vulnerability scanning of your CI/CD workflows
+
+To enable vulnerability scans of images in your GitHub workflows:
+
+[Step 1. Enable the CI/CD integration in Defender for Cloud](#step-1-enable-the-cicd-integration-in-defender-for-cloud)
+
+[Step 2. Add the necessary lines to your GitHub workflow](#step-2-add-the-necessary-lines-to-your-github-workflow-and-perform-a-scan)
+
+### Step 1. Enable the CI/CD integration in Defender for Cloud
+
+1. From Defender for Cloud's menu, open **Environment settings**.
+1. Select the relevant subscription.
+1. From the sidebar of the settings page for that subscription, select **Integrations**.
+1. In the pane that appears, select an Application Insights account to push the CI/CD scan results from your workflow.
+1. Copy the authentication token and connection string into your GitHub workflow.
+
+ :::image type="content" source="./media/defender-for-containers-cicd/enable-cicd-integration.png" alt-text="Enable the CI/CD integration for vulnerability scans of container images in your GitHub workflows." lightbox="./media/defender-for-containers-cicd/enable-cicd-integration.png":::
+
+ > [!IMPORTANT]
+ > The authentication token and connection string are used to correlate the ingested security telemetry with resources in the subscription. If you use invalid values for these parameters, it'll lead to dropped telemetry.
+
+### Step 2. Add the necessary lines to your GitHub workflow and perform a scan
+
+1. From your GitHub workflow, enable CI/CD scanning as follows:
+
+ > [!TIP]
+ > We recommend creating two secrets in your repository to reference in your YAML file as shown below. The secrets can be named according to your own naming conventions. In this example, the secrets are referenced as **AZ_APPINSIGHTS_CONNECTION_STRING** and **AZ_SUBSCRIPTION_TOKEN**.
+
+ > [!IMPORTANT]
+ > The push to the registry must happen prior to the results being published.
+
+ ```yml
+ - name: Build and Tag Image
+ run: |
+ echo "github.sha=$GITHUB_SHA"
+ docker build -t githubdemo1.azurecr.io/k8sdemo:${{ github.sha }} .
+
+ - uses: Azure/container-scan@v0
+ name: Scan image for vulnerabilities
+ id: container-scan
+ continue-on-error: true
+ with:
+ image-name: githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
+
+ - name: Push Docker image
+ run: |
+ docker push githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
+
+ - name: Post logs to appinsights
+ uses: Azure/publish-security-assessments@v0
+ with:
+ scan-results-path: ${{ steps.container-scan.outputs.scan-report-path }}
+ connection-string: ${{ secrets.AZ_APPINSIGHTS_CONNECTION_STRING }}
+ subscription-token: ${{ secrets.AZ_SUBSCRIPTION_TOKEN }}
+ ```
+
+1. Run the workflow that will push the image to the selected container registry. Once the image is pushed into the registry, a scan of the registry runs and you can view the CI/CD scan results along with the registry scan results within Microsoft Defender for Cloud. Running the above YAML file will install an instance of Aqua Security's [Trivy](https://github.com/aquasecurity/trivy) in your build system. Trivy is licensed under the Apache 2.0 License and has dependencies on data feeds, many of which contain their own terms of use.
+
+1. [View CI/CD scan results](#view-cicd-scan-results).
+
+## View CI/CD scan results
+
+1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ ![Recommendation to remediate issues.](media/monitor-container-security/acr-finding.png)
+
+1. Select the recommendation.
+
+ The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
+
+1. Open the **affected resources** list and select an unhealthy registry to see the repositories within it that have vulnerable images.
+
+ :::image type="content" source="media/defender-for-containers-cicd/select-registry.png" alt-text="Select an unhealthy registry.":::
+
+ The registry details page opens with the list of affected repositories.
+
+1. Select a specific repository to see the repositories within it that have vulnerable images.
+
+ :::image type="content" source="media/defender-for-containers-cicd/select-repository.png" alt-text="Select an unhealthy repository.":::
+
+ The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
+
+1. Select a specific image to see the vulnerabilities.
+
+ :::image type="content" source="media/defender-for-containers-cicd/select-image.png" alt-text="Select an unhealthy image.":::
+
+ The list of findings for the selected image opens.
+
+ :::image type="content" source="media/defender-for-containers-cicd/cicd-scan-results.png" alt-text="Image scan results.":::
+
+1. To learn more about which GitHub workflow is pushing these vulnerable images, select the information bubble:
+
+ :::image type="content" source="media/defender-for-containers-cicd/cicd-findings.png" alt-text="CI/CD findings about specific GitHub branches and commits.":::
+
+## Next steps
+
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
On this page, you'll learn how you can use Defender for Containers to improve, m
| Release state: | General availability (GA)<br> Certain features are in preview, for a full list see the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section. | | Feature availability | Refer to the [availability](supported-machines-endpoint-solutions-clouds-containers.md) section for additional information on feature release state and availability.| | Pricing: | **Microsoft Defender for Containers** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) |
-| Required roles and permissions: | ΓÇó To auto provision the required components, [Contributor](../role-based-access-control/built-in-roles.md#contributor), [Log Analytics Contributor](../role-based-access-control/built-in-roles.md#log-analytics-contributor), or [Azure Kubernetes Service Contributor Role](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-contributor-role). See also the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> ΓÇó **Security admin** can dismiss alerts<br> ΓÇó **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
+| Required roles and permissions: | • To auto provision the required components, see the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
Defender for Containers includes an integrated vulnerability scanner for scannin
- When you import the image to your Azure Container Registry - Continuously in specific situations
-Learn more in [Vulnerability assessment](defender-for-container-registries-usage.md).
+Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
:::image type="content" source="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png" alt-text="Sample Microsoft Defender for Cloud recommendation about vulnerabilities discovered in Azure Container Registry (ACR) hosted images." lightbox="./media/defender-for-containers/recommendation-acr-images-with-vulnerabilities.png":::
defender-for-cloud Defender For Containers Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-usage.md
+
+ Title: How to use Defender for Containers to identify vulnerabilities
+description: Learn how to use Defender for Containers to scan images in your registries
++ Last updated : 04/28/2022+++
+# Use Defender for Containers to scan your ACR images for vulnerabilities
+
+This page explains how to use the built-in vulnerability scanner to scan the container images stored in your Azure Resource Manager-based Azure Container Registry.
+
+When the scanner, powered by Qualys, reports vulnerabilities to Defender for Cloud, Defender for Cloud presents the findings and related information as recommendations. In addition, the findings include related information such as remediation steps, relevant CVEs, CVSS scores, and more. You can view the identified vulnerabilities for one or more subscriptions, or for a specific registry.
+
+> [!TIP]
+> You can also scan container images for vulnerabilities as the images are built in your CI/CD GitHub workflows. Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
+
+There are four triggers for an image scan:
+
+- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image. To trigger the scan of an image, push it to your repository.
+
+- **Recently pulled** - Since new vulnerabilities are discovered every day, **Microsoft Defender for Containers** also scans, on a weekly basis, any image that has been pulled within the last 30 days. There's no extra charge for these rescans; as mentioned above, you're billed once per image.
+
+- **On import** - Azure Container Registry has import tools to bring images to your registry from Docker Hub, Microsoft Container Registry, or another Azure container registry. **Microsoft Defender for Containers** scans any supported images you import. Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md).
+
+- **Continuous scan** - This trigger has two modes:
+
+    - A continuous scan based on an image pull. This scan is performed every seven days after an image was pulled, and only for 30 days after the image was pulled. This mode doesn't require the security profile or extension.
+
+    - (Preview) Continuous scan for running images. This scan is performed every seven days for as long as the image runs. This mode runs instead of the first mode when the Defender profile or extension is running on the cluster.
+
+This scan typically completes within 2 minutes, but it might take up to 40 minutes. For every vulnerability identified, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
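
For example, a minimal sketch of triggering the on-push scan described above (registry and image names are placeholders; assumes the Az.ContainerRegistry module and a local Docker installation):

```azurepowershell-interactive
# Sketch only: push an image to trigger an on-push scan.
# 'myregistry' and the image names are hypothetical; replace with your own.
Connect-AzContainerRegistry -Name 'myregistry'   # authenticates your local Docker client to the registry

docker tag myapp:latest myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1       # the push triggers the scan
```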
+
+## Identify vulnerabilities in images in Azure container registries
+
+To enable vulnerability scans of images stored in your Azure Resource Manager-based Azure Container Registry:
+
+1. Enable **Defender for Containers** for your subscription (a command-line sketch follows these steps). Defender for Cloud is now ready to scan images in your registries.
+
+ >[!NOTE]
+ > This feature is charged per image.
+
+ When a scan is triggered, findings are available as Defender for Cloud recommendations from 2 minutes up to 15 minutes after the scan is complete.
+
+1. [View and remediate findings as explained below](#view-and-remediate-findings).
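
If you prefer to script step 1, a minimal sketch using the Az.Security module (assumes you're signed in to the target subscription):

```azurepowershell-interactive
# Sketch only: enable the Defender for Containers plan on the current subscription.
Set-AzSecurityPricing -Name 'Containers' -PricingTier 'Standard'

# Confirm the plan is enabled.
Get-AzSecurityPricing -Name 'Containers' | Select-Object Name, PricingTier
```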
+
+## Identify vulnerabilities in images in other container registries
+
+1. Use the ACR tools to bring images to your registry from Docker Hub or Microsoft Container Registry (see the sketch after these steps). When the import completes, the imported images are scanned by the built-in vulnerability assessment solution.
+
+ Learn more in [Import container images to a container registry](../container-registry/container-registry-import-images.md)
+
+ When the scan completes (typically after approximately 2 minutes, but can be up to 15 minutes), findings are available as Defender for Cloud recommendations.
+
+1. [View and remediate findings as explained below](#view-and-remediate-findings).
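
A minimal sketch of the import step with PowerShell (registry and image names are placeholders, and the exact parameter set of `Import-AzContainerRegistryImage` should be treated as an assumption; verify it against your installed Az.ContainerRegistry version):

```azurepowershell-interactive
# Sketch only: import a public image into your registry, which triggers the on-import scan.
Import-AzContainerRegistryImage -ResourceGroupName 'myResourceGroup' -RegistryName 'myregistry' `
  -SourceRegistryUri 'docker.io' -SourceImage 'library/nginx:latest'
```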
+
+## View and remediate findings
+
+1. To view the findings, open the **Recommendations** page. If issues were found, you'll see the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+    ![Recommendation to remediate issues.](media/monitor-container-security/acr-finding.png)
+
+1. Select the recommendation.
+
+ The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
+
+1. Select a specific registry to see the repositories within it that have vulnerable images.
+
+ ![Select a registry.](media/monitor-container-security/acr-finding-select-registry.png)
+
+ The registry details page opens with the list of affected repositories.
+
+1. Select a specific repository to see the images within it that have vulnerabilities.
+
+ ![Select a repository.](media/monitor-container-security/acr-finding-select-repository.png)
+
+ The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
+
+1. Select a specific image to see the vulnerabilities.
+
+ ![Select images.](media/monitor-container-security/acr-finding-select-image.png)
+
+ The list of findings for the selected image opens.
+
+ ![List of findings.](media/monitor-container-security/acr-findings.png)
+
+1. To learn more about a finding, select the finding.
+
+ The findings details pane opens.
+
+ [![Findings details pane.](media/monitor-container-security/acr-finding-details-pane.png)](media/monitor-container-security/acr-finding-details-pane.png#lightbox)
+
+ This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
+
+1. Follow the steps in the remediation section of this pane.
+
+1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
+
+ 1. Push the updated image to trigger a scan.
+
+ 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
+
+ If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
+
+    1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
+
+## Disable specific findings
+
+> [!NOTE]
+> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
+
+If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
+
+When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
+
+- Disable findings with severity below medium
+- Disable findings that are non-patchable
+- Disable findings with CVSS score below 6.5
+- Disable findings with specific text in the security check or category (for example, "RedHat", "CentOS Security Update for sudo")
+
+> [!IMPORTANT]
+> To create a rule, you need permissions to edit a policy in Azure Policy.
+>
+> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
+
+You can use any of the following criteria:
+
+- Finding ID
+- Category
+- Security check
+- CVSS v3 scores
+- Severity
+- Patchable status
+
+To create a rule:
+
+1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
+1. Select the relevant scope.
+1. Define your criteria.
+1. Select **Apply rule**.
+
+ :::image type="content" source="./media/defender-for-containers-usage/new-disable-rule-for-registry-finding.png" alt-text="Create a disable rule for VA findings on registry.":::
+
+1. To view, override, or delete a rule:
+ 1. Select **Disable rule**.
+ 1. From the scope list, subscriptions with active rules show as **Rule applied**.
+ :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Modify or delete an existing rule.":::
+ 1. To view or delete the rule, select the ellipsis menu ("...").
+
+## FAQ
+
+### How does Defender for Cloud scan an image?
+Defender for Cloud pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+
+Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
+
+### Can I get the scan results via REST API?
+Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
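
For example, a minimal sketch of such a query with the Az.ResourceGraph module; the assessment key is the one used by the recommendation referenced earlier in this article, and the exact result schema should be treated as an assumption:

```azurepowershell-interactive
# Sketch only: fetch container registry vulnerability findings from Azure Resource Graph.
Search-AzGraph -Query "securityresources
| where type == 'microsoft.security/assessments/subassessments'
| where id contains 'dbd0cb49-b563-45e7-9724-889e799fa648'
| project id, name, properties"
```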
+
+### What registry types are scanned? What types are billed?
+For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](defender-for-container-registries-introduction.md#availability).
+
+If you connect unsupported registries to your Azure subscription, Defender for Cloud won't scan them and won't bill you for them.
+
+### Can I customize the findings from the vulnerability scanner?
+Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
+
+[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-usage.md#disable-specific-findings).
+
+### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?
+Some images may reuse tags from an image that was already scanned. For example, you may reassign the tag "Latest" every time you add an image to a digest. In such cases, the 'old' image still exists in the registry and may still be pulled by its digest. If the image has security findings and is pulled, it'll expose security vulnerabilities.
+
+## Next steps
+
+Learn more about the [advanced protection plans of Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
When you set up your solution, you must choose a resource group to attach it to.
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
The findings for **all** vulnerability assessment tools are in the Defender for
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+- Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Within 48 hrs of the disclosure of a critical vulnerability, Qualys incorporates
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - see [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-container-registries-usage.md)
+- Azure Container Registry images - see [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-usage.md)
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
By default, auto provisioning is enabled when you enable Defender for Containers
| Aspect | Azure Kubernetes Service clusters | Azure Arc-enabled Kubernetes clusters | ||-||
-| Release state: | • Defender profile is in preview<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
+| Release state: | • Defender profile: Preview<br> • Azure Policy add-on: Generally available (GA) | • Defender extension: Preview<br> • Azure Policy extension: Preview |
| Relevant Defender plan: | [Microsoft Defender for Containers](defender-for-containers-introduction.md) | [Microsoft Defender for Containers](defender-for-containers-introduction.md) |
-| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) |
-| Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters](defender-for-containers-introduction.md?tabs=defender-for-container-arch-aks#microsoft-defender-for-containers-plan-availability) |
+| Required roles and permissions (subscription-level): | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) | [Owner](../role-based-access-control/built-in-roles.md#owner) or [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) |
+| Supported destinations: | The AKS Defender profile only supports [AKS clusters that have RBAC enabled](../aks/concepts-identity.md#kubernetes-rbac). | [See Kubernetes distributions supported for Arc-enabled Kubernetes](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#kubernetes-distributions-and-configurations) |
| Policy-based: | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | :::image type="icon" source="./media/icons/yes-icon.png"::: Yes | | Clouds: | **Defender profile**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy add-on**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government, Azure China 21Vianet|**Defender extension**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet<br>**Azure Policy extension for Azure Arc**:<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure Government, Azure China 21Vianet|
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Defender for Cloud recommends **Endpoint protection health issues should be reso
Defender for Cloud recommends **Endpoint protection should be installed on your machines** when any of the following checks aren't met: -- File **/opt/isec/ens/threatprevention/bin/isecav** exists-- **"/opt/isec/ens/threatprevention/bin/isecav --version"** output is: **McAfee name = McAfee Endpoint Security for Linux Threat Prevention and McAfee version >= 10**
+- File **/opt/McAfee/ens/tp/bin/mfetpcli** exists
+- **"/opt/McAfee/ens/tp/bin/mfetpcli --version"** output is: **McAfee name = McAfee Endpoint Security for Linux Threat Prevention and McAfee version >= 10**
Defender for Cloud recommends **Endpoint protection health issues should be resolved on your machines** when any of the following checks aren't met: -- **"/opt/isec/ens/threatprevention/bin/isecav --listtask"** returns **Quick scan, Full scan** and both of the scans <= 7 days-- **"/opt/isec/ens/threatprevention/bin/isecav --listtask"** returns **DAT and engine Update time** and both of them <= 7 days-- **"/opt/isec/ens/threatprevention/bin/isecav --getoasconfig --summary"** returns **On Access Scan** status
+- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **Quick scan, Full scan** and both of the scans <= 7 days
+- **"/opt/McAfee/ens/tp/bin/mfetpcli --listtask"** returns **DAT and engine Update time** and both of them <= 7 days
+- **"/opt/McAfee/ens/tp/bin/mfetpcli --getoasconfig --summary"** returns **On Access Scan** status
## Sophos Antivirus for Linux
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
To learn more about the integration of vulnerability scanning tools from Qualys,
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - see [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)
-* Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md)
+* Azure Container Registry images - see [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md)
## How security solutions are integrated Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds.
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
description: Defend your AWS resources with Microsoft Defender for Cloud
Previously updated : 03/27/2022 Last updated : 05/03/2022 zone_pivot_groups: connect-aws-accounts
This screenshot shows AWS accounts displayed in Defender for Cloud's [overview d
- **To enable the Defender for Servers plan**, you'll need:
- - Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in the [Enable enhanced security features](enable-enhanced-security.md) article.
+ - Microsoft Defender for Servers enabled on your subscription. Learn how to enable plans in [Enable enhanced security features](enable-enhanced-security.md).
- An active AWS account, with EC2 instances. - Azure Arc for servers installed on your EC2 instances.
- - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing, and future EC2 instances managed by AWS Systems Manager (SSM) and using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed. If that is the case, their AMI's are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you will need to install it using either of the following relevant instructions from Amazon:
+ - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
+
+    Auto provisioning is managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent pre-installed; those AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, install it using either of the following instructions from Amazon:
- [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html) - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html) > [!NOTE] > To enable the Azure Arc auto-provisioning, you'll need an **Owner** permission on the relevant Azure subscription.
- - To manually install Azure Arc on your existing and future EC2 instances, follow the instructions in the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation.
+ - If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that do not have Azure Arc installed.
- - Additional extensions should be enabled on the Arc-connected machines. These extensions are currently configured in the subscription level. It means that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regards to these components.
+ - Additional extensions should be enabled on the Arc-connected machines.
- Microsoft Defender for Endpoint - VA solution (TVM/ Qualys) - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
+
+    The LA agent is currently configured at the subscription level, so all the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for the LA agent.
+
+ Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
-> [!Note]
-> Without the Arc agent, you will be unable to take advantage of Defender for server's value. The Arc agent can also be installed manually, and not by the auto-provisioning process.
+ > [!NOTE]
+ > Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
+ **AccountId**, **Cloud**, **InstanceId**, **MDFCSecurityConnector**
## Connect your AWS account
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To have full visibility to Microsoft Defender for Servers security content, ensu
- **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that are not connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines. -- The following extensions should be enabled on the Arc-connected machines according to your needs:
+- Additional extensions should be enabled on the Arc-connected machines.
- Microsoft Defender for Endpoint - VA solution (TVM/ Qualys) - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
+
+    The LA agent is currently configured at the subscription level, so all the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for the LA agent.
+
+ Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
- These extensions are currently configured as auto-provisioning settings on the subscription level. All GCP projects and AWS accounts under this subscription will inherit the subscription settings. Learn how to [configure auto-provisioning on your subscription](enable-data-collection.md#configure-auto-provisioning-for-agents-and-extensions-from-microsoft-defender-for-cloud).
+ > [!NOTE]
+ > Defender for Servers assigns tags to your GCP resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
+ **Cloud**, **InstanceName**, **MDFCSecurityConnector**, **MachineId**, **ProjectId**, **ProjectNumber**
**To configure the Servers plan**:
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
It's likely that this change will impact your secure scores. For most subscripti
### Azure Defender for container registries now scans for vulnerabilities in registries protected with Azure Private Link
-Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md).
+Azure Defender for container registries includes a vulnerability scanner to scan images in your Azure Container Registry registries. Learn how to scan your registries and remediate findings in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).
To limit access to a registry hosted in Azure Container Registry, assign virtual network private IP addresses to the registry endpoints and use Azure Private Link as explained in [Connect privately to an Azure container registry using Azure Private Link](../container-registry/container-registry-private-link.md).
The new vulnerability scanning feature for container images, utilizing Trivy, he
Container scan reports are summarized in Azure Security Center, providing security teams better insight and understanding about the source of vulnerable container images and the workflows and repositories from where they originate.
-Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-container-registries-cicd.md).
+Learn more in [Identify vulnerable container images in your CI/CD workflows](defender-for-containers-cicd.md).
### More Resource Graph queries available for some recommendations
Learn more about Security Center's vulnerability scanners:
- [Azure Defender's integrated Qualys vulnerability scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md) - [Azure Defender's integrated vulnerability assessment scanner for SQL servers](defender-for-sql-on-machines-vulnerability-assessment.md)-- [Azure Defender's integrated vulnerability assessment scanner for container registries](defender-for-container-registries-usage.md)
+- [Azure Defender's integrated vulnerability assessment scanner for container registries](defender-for-containers-usage.md)
### SQL data classification recommendation severity changed
New vulnerabilities are discovered every day. With this update, container images
Scanning is charged on a per image basis, so there's no additional charge for these rescans.
-Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-container-registries-usage.md).
+Learn more about this scanner in [Use Azure Defender for container registries to scan your images for vulnerabilities](defender-for-containers-usage.md).
### Use Azure Defender for Kubernetes to protect hybrid and multi-cloud Kubernetes deployments (in preview)
This option is available from the recommendations details pages for:
- **Vulnerabilities in Azure Container Registry images should be remediated** - **Vulnerabilities in your virtual machines should be remediated**
-Learn more in [Disable specific findings for your container images](defender-for-container-registries-usage.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings).
+Learn more in [Disable specific findings for your container images](defender-for-containers-usage.md#disable-specific-findings) and [Disable specific findings for your virtual machines](remediate-vulnerability-findings-vm.md#disable-specific-findings).
### Exempt a resource from a recommendation
The security findings are now available for export through continuous export whe
Related pages: - [Security Center's integrated Qualys vulnerability assessment solution for Azure virtual machines](deploy-vulnerability-assessment-vm.md)-- [Security Center's integrated vulnerability assessment solution for Azure Container Registry images](defender-for-container-registries-usage.md)
+- [Security Center's integrated vulnerability assessment solution for Azure Container Registry images](defender-for-containers-usage.md)
- [Continuous export](continuous-export.md) ### Prevent security misconfigurations by enforcing recommendations when creating new resources
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Changes in our roadmap and priorities have removed the need for the network traf
Defender for Container's image scan now supports Windows images that are hosted in Azure Container Registry. This feature is free while in preview, and will incur a cost when it becomes generally available.
-Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-container-registries-usage.md).
+Learn more in [Use Microsoft Defender for Container to scan your images for vulnerabilities](defender-for-containers-usage.md).
### New alert for Microsoft Defender for Storage (preview)
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Domain | Feature | Supported Resources | Release state <sup>[1](#footnote1)</sup> | Windows support | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--|--|
-| Compliance | Docker CIS | VMs | GA | X | Log Analytics agent | Defender for Servers Plan 2 | |
+| Compliance | Docker CIS | VM, VMSS | GA | X | Log Analytics agent | Defender for Servers Plan 2 | |
| Vulnerability Assessment | Registry scan | ACR, Private ACR | GA | ✓ (Preview) | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | ✓ (Preview) | Defender profile | Defender for Containers | Commercial clouds | | Hardening | Control plane recommendations | ACR, AKS | GA | ✓ | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
defender-for-cloud Supported Machines Endpoint Solutions Clouds Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-servers.md
For information about when recommendations are generated for each of these solut
| - [Microsoft Defender for App Service](./defender-for-app-service-introduction.md) | GA | Not Available | Not Available | | - [Microsoft Defender for DNS](./defender-for-dns-introduction.md) | GA | GA | GA | | - [Microsoft Defender for container registries](./defender-for-container-registries-introduction.md) <sup>[1](#footnote1)</sup> | GA | GA <sup>[2](#footnote2)</sup> | GA <sup>[2](#footnote2)</sup> |
-| - [Microsoft Defender for container registries scanning of images in CI/CD workflows](./defender-for-container-registries-cicd.md) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | Not Available |
+| - [Microsoft Defender for container registries scanning of images in CI/CD workflows](./defender-for-containers-cicd.md) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | Not Available |
| - [Microsoft Defender for Kubernetes](./defender-for-kubernetes-introduction.md) <sup>[4](#footnote4)</sup> | GA | GA | GA | | - [Microsoft Defender for Containers](./defender-for-containers-introduction.md) <sup>[10](#footnote4)</sup> | GA | GA | GA | | - [Defender extension for Azure Arc-enabled Kubernetes clusters, servers or data services](./defender-for-kubernetes-azure-arc.md) <sup>[5](#footnote5)</sup> | Public Preview | Not Available | Not Available |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 05/02/2022 Last updated : 05/10/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| [Changes to vulnerability assessment](#changes-to-vulnerability-assessment) | May 2022 | | [Key Vault recommendations changed to "audit"](#key-vault-recommendations-changed-to-audit) | May 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | June 2022 |
+| [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
### Changes to recommendations for managing endpoint protection solutions
These accounts can be targets for attackers looking to find ways to access your
| Description | User accounts that have been blocked from signing in, should be removed from your subscriptions. <br> These accounts can be targets for attackers looking to find ways to access your data without being noticed. | User accounts that have been blocked from signing into Active Directory, should be removed from your subscriptions. These accounts can be targets for attackers looking to find ways to access your data without being noticed. <br> Learn more about securing the identity perimeter in [Azure Identity Management and access control security best practices](../security/fundamentals/identity-management-best-practices.md). | | Related policy | [Deprecated accounts should be removed from your subscription](https://ms.portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f6b1cbf55-e8b6-442f-ba4c-7246b6381474) | Subscriptions should be purged of accounts that are blocked in Active Directory and have read and write permissions. |
+### Deprecating three VM alerts
+
+**Estimated date for change:** June 2022
+
+The following table lists the alerts that will be deprecated during June 2022.
+
+| Alert name | Description | Tactics | Severity |
+|--|--|--|--|
+| **Docker build operation detected on a Kubernetes node** <br>(VM_ImageBuildOnNode) | Machine logs indicate a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | Defense Evasion | Low |
+| **Suspicious request to Kubernetes API** <br>(VM_KubernetesAPI) | Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. | LateralMovement | Medium |
+| **SSH server is running inside a container** <br>(VM_ContainerSSH) | Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached. | Execution | Medium |
+
+These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_KubernetesAPI`, and `K8S.NODE_ContainerSSH`), which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
+ ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md)
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
You'll also need to download the materials for the sample graph used in the quic
The first step in working with Azure Digital Twins is to create an Azure Digital Twins instance. After you create an instance of the service, you can connect to the instance in Azure Digital Twins Explorer, which you'll use to work with the instance throughout the quickstart.
-The rest of this section walks you through these steps.
+The rest of this section walks you through the instance creation.
### Create an Azure Digital Twins instance
In this quickstart, you made the temperature update manually. It's common in Azu
## Clean up resources
-To clean up after this quickstart, choose which resources you want to remove based on what you want to do next.
+To clean up after this quickstart, choose which Azure Digital Twins resources you want to remove, based on what you want to do next.
* If you plan to continue to the Azure Digital Twins tutorials, you can reuse the instance in this quickstart for those articles, and you don't need to remove it. [!INCLUDE [digital-twins-cleanup-clear-instance.md](../../includes/digital-twins-cleanup-clear-instance.md)]
-* If you don't need your Azure Digital Twins instance anymore, you can delete it using the Azure portal.
+* If you don't need your Azure Digital Twins instance anymore, you can delete it using the [Azure portal](https://portal.azure.com).
Navigate back to the instance's **Overview** page in the portal. (If you've already closed that tab, you can find the instance again by searching for its name in the Azure portal search bar and selecting it from the search results.)
event-hubs Event Hubs Geo Dr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-geo-dr.md
Title: Geo-disaster recovery - Azure Event Hubs| Microsoft Docs description: How to use geographical regions to fail over and perform disaster recovery in Azure Event Hubs Previously updated : 06/21/2021 Last updated : 05/10/2022 # Azure Event Hubs - Geo-disaster recovery
When creating a namespace, you see the following highlighted message when you se
:::image type="content" source="./media/event-hubs-geo-dr/eh-az.png" alt-text="Image showing the Create Namespace page with region that has availability zones":::
+> [!NOTE]
+> When you use the Azure portal, zone redundancy via support for availability zones is automatically enabled. You can't disable it in the portal. You can use the Azure CLI command [`az eventhubs namespace`](/cli/azure/eventhubs/namespace#az-eventhubs-namespace-create) with `--zone-redundant=false` or use the PowerShell command [`New-AzEventHubNamespace`](/powershell/module/az.eventhub/new-azeventhubnamespace) with `-ZoneRedundant=false` to create a namespace with zone redundancy disabled.
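
For example, a minimal PowerShell sketch of the note above (resource names are placeholders; assumes the Az.EventHub module and an existing resource group):

```azurepowershell-interactive
# Sketch only: create a namespace with zone redundancy disabled.
New-AzEventHubNamespace -ResourceGroupName 'myResourceGroup' -Name 'myNamespace' `
  -Location 'eastus' -SkuName 'Standard' -ZoneRedundant:$false
```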
+ ## Private endpoints This section provides more considerations when using Geo-disaster recovery with namespaces that use private endpoints. To learn about using private endpoints with Event Hubs in general, see [Configure private endpoints](private-link-service.md).
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
If you are remote and do not have fiber connectivity or you want to explore othe
| **[ThinkTel](https://www.thinktel.ca/services/agile-ix-data/expressroute/)** | Equinix | Toronto | | **[United Information Highway (UIH)](https://www.uih.co.th/en/internet-solution/cloud-direct/uih-cloud-direct-for-microsoft-azure-expressroute)**| Equinix | Singapore | | **[Venha Pra Nuvem](https://venhapranuvem.com.br/)** | Equinix | Sao Paulo |
-| **[Webair](https://www.webair.com/microsoft-express-route-partnership/)**| Megaport | New York |
+| **[Webair](https://opti9tech.com/partners/)**| Megaport | New York |
| **[Windstream](https://www.windstreambusiness.com/solutions/cloud-services/cloud-and-managed-hosting-services)**| Equinix | Chicago, Silicon Valley, Washington DC | | **[X2nsat Inc.](https://www.x2nsat.com/expressroute/)** |Coresite |Silicon Valley, Silicon Valley 2| | **Zain** |Equinix |London|
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | |
+| [Bits received per second](#gwbits) | Performance | BitsPerSecond | Average | Total bits received on ExpressRoute gateway per second | roleInstance | No |
| [CPU utilization](#cpu) | Performance | Count | Average | CPU Utilization of the ExpressRoute Gateway | roleInstance | Yes |
-| [Packets per second](#packets) | Performance | CountPerSecond | Average | Packet count of ExpressRoute Gateway | roleInstance | No |
+| [Packets per second](#packets) | Performance | CountPerSecond | Average | Total Packets received on ExpressRoute Gateway per second | roleInstance | No |
| [Count of routes advertised to peer](#advertisedroutes) | Availability | Count | Maximum | Count Of Routes Advertised To Peer by ExpressRouteGateway | roleInstance | Yes | | [Count of routes learned from peer](#learnedroutes)| Availability | Count | Maximum | Count Of Routes Learned From Peer by ExpressRouteGateway | roleInstance | Yes | | [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | No |
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | |
-| [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | ConnectionName | No |
-| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | ConnectionName | No |
+| [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | No |
+| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | No |
| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | ConnectionName | Yes | | DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | ConnectionName | Yes |
Aggregation type: *Avg*
When you deploy an ExpressRoute gateway, Azure manages the compute and functions of your gateway. There are six gateway metrics available to you to better understand the performance of your gateway:
+* Bits received per second
* CPU Utilization * Packets per seconds * Count of routes advertised to peers
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
It's highly recommended you set alerts for each of these metrics so that you are aware of when your gateway could be seeing performance issues.
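
As an example, here's a minimal sketch of a CPU utilization alert with the Az.Monitor module; the gateway name, resource group, threshold, and the metric name ('ExpressRouteGatewayCpuUtilization') are assumptions to adapt to your environment:

```azurepowershell-interactive
# Sketch only: alert when the gateway's average CPU utilization exceeds 80%.
$gw = Get-AzVirtualNetworkGateway -Name 'myErGateway' -ResourceGroupName 'myResourceGroup'

$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'ExpressRouteGatewayCpuUtilization' `
  -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name 'er-gateway-high-cpu' -ResourceGroupName 'myResourceGroup' `
  -TargetResourceId $gw.Id -WindowSize 00:05:00 -Frequency 00:05:00 `
  -Criteria $criteria -Severity 2
```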
-### <a name = "cpu"></a>CPU Utilization - Split Instance
+### <a name = "gwbits"></a>Bits received per second - Split by instance
+
+Aggregation type: *Avg*
+
+This metric captures inbound bandwidth utilization on the ExpressRoute virtual network gateway instances. Set an alert for how frequently the bandwidth utilization exceeds a certain threshold. If you need more bandwidth, increase the size of the ExpressRoute virtual network gateway.
++
+### <a name = "cpu"></a>CPU Utilization - Split by instance
Aggregation type: *Avg*
You can view the CPU utilization of each gateway instance. The CPU utilization m
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/cpu-split.jpg" alt-text="Screenshot of CPU utilization - split metrics.":::
-### <a name = "packets"></a>Packets Per Second - Split by Instance
+### <a name = "packets"></a>Packets Per Second - Split by instance
Aggregation type: *Avg*
This metric captures the number of inbound packets traversing the ExpressRoute g
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/pps-split.jpg" alt-text="Screenshot of packets per second - split metrics.":::
-### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by Instance
+### <a name = "advertisedroutes"></a>Count of Routes Advertised to Peer - Split by instance
Aggregation type: *Count*
This metric is the count for the number of routes the ExpressRoute gateway is ad
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-advertised-to-peer.png" alt-text="Screenshot of count of routes advertised to peer.":::
-### <a name = "learnedroutes"></a>Count of Routes Learned from Peer - Split by Instance
+### <a name = "learnedroutes"></a>Count of Routes Learned from Peer - Split by instance
Aggregation type: *Max*
This metric shows the number of routes the ExpressRoute gateway is learning from
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/count-of-routes-learned-from-peer.png" alt-text="Screenshot of count of routes learned from peer.":::
-### <a name = "frequency"></a>Frequency of Routes change - Split by Instance
+### <a name = "frequency"></a>Frequency of Routes change - Split by instance
Aggregation type: *Sum*
This metric shows the number of virtual machines that are using the ExpressRoute
Aggregation type: *Avg*
-This metric shows the bandwidth usage for a specific connection to an ExpressRoute circuit.
+This metric shows the bits per second for ingress and egress to Azure through the ExpressRoute gateway. You can split this metric further to see specific connections to the ExpressRoute circuit.
:::image type="content" source="./media/expressroute-monitoring-metrics-alerts/erconnections.jpg" alt-text="Screenshot of gateway connection bandwidth usage metric.":::
governance Guest Configuration Create Signing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/guest-configuration-create-signing.md
require code to be signed.
To use the Signature Validation feature, run the `Protect-GuestConfigurationPackage` cmdlet to sign the package before it's
-published. This cmdlet requires a 'Code Signing' certificate.
+published. This cmdlet requires a 'Code Signing' certificate. If you don't have a 'Code Signing' certificate, use the script below to create a self-signed certificate for testing purposes so you can follow along with the example.
+
+## Windows signature validation
```azurepowershell-interactive
-$Cert = Get-ChildItem -Path cert:\LocalMachine\My | Where-Object {($_.Subject-eq "CN=mycert") }
-Protect-GuestConfigurationPackage -Path .\package\AuditWindowsService\AuditWindowsService.zip -Certificate $Cert -Verbose
+# How to create a self-signed certificate and use it to sign a Guest Configuration custom policy package
+
+# Create Code signing cert
+$mycert = New-SelfSignedCertificate -Type CodeSigningCert -DnsName 'GCEncryptionCertificate' -HashAlgorithm SHA256
+
+# Export the certificates
+$mypwd = ConvertTo-SecureString -String "Password1234" -Force -AsPlainText
+$mycert | Export-PfxCertificate -FilePath C:\demo\GCPrivateKey.pfx -Password $mypwd
+$mycert | Export-Certificate -FilePath "C:\demo\GCPublicKey.cer" -Force
+
+# Import the certificate
+Import-PfxCertificate -FilePath C:\demo\GCPrivateKey.pfx -Password $mypwd -CertStoreLocation 'Cert:\LocalMachine\My'
++
+# Sign the policy package
+$certToSignThePackage = Get-ChildItem -Path cert:\LocalMachine\My | Where-Object {($_.Subject-eq "CN=GCEncryptionCertificate") }
+Protect-GuestConfigurationPackage -Path C:\demo\AuditWindowsService.zip -Certificate $certToSignThePackage -Verbose
+```
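
As noted later in this article, machines that verify signed content must trust the certificate's public key. A minimal sketch of installing the public key exported above into the Windows trusted root store (paths come from the example; any other distribution technique, including Azure Policy, works too):

```azurepowershell-interactive
# Sketch only: install the exported public key so the machine trusts the signed package.
Import-Certificate -FilePath 'C:\demo\GCPublicKey.cer' -CertStoreLocation 'Cert:\LocalMachine\Root'
```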
+
+## Linux signature validation
+
+```bash
+# generate gpg key
+gpg --gen-key
+
+# export public key
+gpg --output public.gpg --export <email-id used to generate gpg key>
+# export private key
+gpg --output private.gpg --export-secret-key <email-id used to generate gpg key>
+
+# Sign linux policy package
+Import-Module GuestConfiguration
+Protect-GuestConfigurationPackage -Path ./not_installed_application_linux.zip -PrivateGpgKeyPath ./private.gpg -PublicGpgKeyPath ./public.gpg -Verbose
``` Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
Parameters of the `Protect-GuestConfigurationPackage` cmdlet:
GuestConfiguration agent expects the certificate public key to be present in "Trusted Root Certificate Authorities" on Windows machines and in the path
-`/usr/local/share/ca-certificates/extra` on Linux machines. For the node to
+`/usr/local/share/ca-certificates/gc` on Linux machines. For the node to
verify signed content, install the certificate public key on the machine before applying the custom policy. This process can be done using any technique inside the VM or by using Azure Policy. An example template is available
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
az graph query -q "Resources | where type == 'microsoft.compute/virtualmachines'
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive
-Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachines' | extend JoinID = toupper(id), OSName = tostring(properties.osProfile.computerName), OSType = tostring(properties.storageProfile.osDisk.osType), VMSize = tostring(properties.hardwareProfile.vmSize) | join kind=leftouter( Resources | where type == 'microsoft.compute/virtualmachines/extensions' | extend VMId = toupper(substring(id, 0, indexof(id, '/extensions'))), ExtensionName = name ) on $left.JoinID == $right.VMId | summarize Extensions = make_list(ExtensionName) by id, OSName, OSType, VMSize | order by tolower(OSName) asc"
+Search-AzGraph -Query "Resources | where type == 'microsoft.compute/virtualmachines' | extend JoinID = toupper(id), OSName = tostring(properties.osProfile.computerName), OSType = tostring(properties.storageProfile.osDisk.osType), VMSize = tostring(properties.hardwareProfile.vmSize) | join kind=leftouter( Resources | where type == 'microsoft.compute/virtualmachines/extensions' | extend VMId = toupper(substring(id, 0, indexof(id, '/extensions'))), ExtensionName = name ) on `$left.JoinID == `$right.VMId | summarize Extensions = make_list(ExtensionName) by id, OSName, OSType, VMSize | order by tolower(OSName) asc"
``` # [Portal](#tab/azure-portal)
hdinsight Troubleshoot Data Retention Issues Expired Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/troubleshoot-data-retention-issues-expired-data.md
+
+ Title: Troubleshoot data retention (TTL) issues with expired data not being deleted from storage on Azure HDInsight
+description: Troubleshoot various data-retention (TTL) issues with expired data not being deleted from storage on Azure HDInsight
++ Last updated : 05/06/2022++
+# Troubleshoot data retention (TTL) issues with expired data not being deleted from storage on Azure HDInsight
+
+In an HBase cluster, you may decide to remove data after it ages, either to free some storage and save on costs (because the older data is no longer needed) or to comply with regulations. When that's needed, you usually set TTL on a table at the ColumnFamily level to expire and automatically delete older data. While TTL can also be set at the cell level, setting it at the ColumnFamily level is usually the more convenient option: it's easier to administer, and a cell TTL (expressed in ms) can't extend the effective lifetime of a cell beyond a ColumnFamily-level TTL setting (expressed in seconds), so only shorter retention times at the cell level could benefit from cell-level TTL.
+
+Despite setting TTL, you may sometimes notice that you don't obtain the desired effect: some data hasn't expired, or the storage size hasn't decreased.
+
+## Prerequisites
+
+To prepare for the steps and commands below, open two ssh connections to the HBase cluster:
+* In one ssh session, keep the default bash shell.
+* In the second ssh session, launch HBase shell by running the command below.
+
+ ```
+ hbase shell
+ ```
+
+### Check if desired TTL is configured and if expired data is removed from query result
+
+Follow the steps below to understand where the issue is. Start by checking whether the behavior occurs for a specific table or for all tables. If you're unsure whether the issue impacts all tables or a specific one, just use a specific table name as an example to start with.
+
+1. First, check that TTL has been configured for the ColumnFamily of the target tables. Run the command below in the ssh session where you launched HBase shell and observe the output. In the example output, one column family has TTL set to 50 seconds; the other ColumnFamily has no TTL value configured, so it appears as "FOREVER" (data in this column family isn't configured to expire).
+
+ ```
+ describe 'table_name'
+ ```
+
+1. If not configured, the default TTL is 'FOREVER'. There are two possibilities why data isn't expired as expected and removed from the query result:
+    1. If TTL has any value other than 'FOREVER', observe the value for the column family and note it down in seconds (pay special attention to the unit of measure: cell TTL is in ms, but column family TTL is in seconds) to confirm it's the expected one. If the observed value isn't correct, fix that first.
+    1. If the TTL value is 'FOREVER' for all column families, configure TTL as a first step (for example, with an `alter` command; see the sketch after these steps), and afterwards monitor whether data expires as expected.
+1. If you establish that TTL is configured and has the correct value for the ColumnFamily, the next step is to confirm that the expired data no longer shows up when doing table scans. When data expires, it should be removed and not show up in the scan table results. Run the following command in HBase shell to check.
+ ```
+ scan 'table_name'
+ ```
+### Check the number and size of StoreFiles per table per region to observe if any changes are visible after the compaction operation
+
+1. Before moving to the next step, from the ssh session with the bash shell, check the current number of StoreFiles and the size of each StoreFile for the ColumnFamily on which TTL has been configured. First note the table and ColumnFamily you'll check, then run the following command in the ssh session (bash).
+
+ ```
+ hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name"
+ ```
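+
+   For illustration, assuming a column family named `cf1`, the output might resemble the following (the region and file names, sizes, and dates are hypothetical):
+
+   ```
+   drwxr-xr-x   - hbase hdfs        0 2022-05-10 10:01 /hbase/data/default/table_name/0f3ed0b.../cf1
+   -rw-r--r--   3 hbase hdfs    12345 2022-05-10 10:05 /hbase/data/default/table_name/0f3ed0b.../cf1/41d2a8...
+   ```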
+1. The output will likely contain multiple rows: one row for each region ID that's part of the table, and zero or more rows for the StoreFiles present under each region name, for the selected ColumnFamily. To count the overall number of rows in the output above, run the following command.
+
+ ```
+ hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name" | wc -l
+ ```
+
+### Check the number and size of StoreFiles per table per region after flush
+
+1. Depending on the TTL configured for each ColumnFamily and on how much data is written to the table for the target ColumnFamily, part of the data may still be in the MemStore and not yet written to storage as a StoreFile. To make sure that the data is written to storage as a StoreFile before the maximum configured MemStore size is reached, you can run the following command in HBase shell to write data from the MemStore to StoreFiles immediately.
+ ```
+ flush 'table_name'
+ ```
+
+1. Observe the result by running the same command in the bash shell again.
+
+ ```
+ hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name"
+ ```
+
+1. Compared to the previous output, an additional StoreFile is created for each region where data was modified; the new StoreFile contains the current content of the MemStore for that region.
+
+### Check the number and size of StoreFiles per table per region after major compaction
+
+1. At this point, the data from the MemStore has been written to StoreFiles in storage, but expired data may still exist in one or more of the current StoreFiles. Although minor compactions can help delete some of the expired entries, they aren't guaranteed to remove all of them: minor compaction doesn't usually select all the StoreFiles for compaction, while major compaction selects all the StoreFiles in that region.
+
+   Also, there's another situation where minor compaction may not remove cells with an expired TTL. The MIN_VERSIONS property defaults to 0 (in the output of `describe 'table_name'` above, note the property `MIN_VERSIONS => '0'`). If this property is set to 0, minor compaction removes the cells with expired TTL. If the value is greater than 0, minor compaction may not remove cells with expired TTL, even if it touches the corresponding file as part of the compaction. This property configures the minimum number of versions of a cell to keep, even if those versions have an expired TTL.
+
+1. To make sure expired data is also deleted from storage, you need to run a major compaction operation. When it completes, the major compaction operation leaves behind a single StoreFile per region. In HBase shell, run the following command to execute a major compaction operation on the table:
+ ```
+ major_compact 'table_name'
+ ```
+
+1. Depending on the table size, the major compaction operation can take some time. Use the command below in HBase shell to monitor progress. If the compaction is still running when you execute the command, you'll see the output "MAJOR"; if the compaction has completed, you'll see the output "NONE".
+ ```
+ compaction_state 'table_name'
+ ```
+
+1. When the compaction status appears as "NONE" in HBase shell, switch quickly to bash and run the following command.
+ ```
+ hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name"
+ ```
+1. You'll notice that an extra StoreFile has been created, in addition to the previous ones, for each region and ColumnFamily. After several moments, only the most recently created StoreFile is kept per region per column family.
+
+For the example region above, once those extra moments elapse, you can see that a single StoreFile remains and that the size this file occupies in storage is reduced, because a major compaction occurred. At this point, any expired data that wasn't deleted before (by another major compaction) is deleted by the current major compaction operation.
+
+> [!NOTE]
+> For this troubleshooting exercise, we triggered the major compaction manually. In practice, doing that manually for many tables might be time consuming. By default, major compaction is disabled on HDInsight clusters. The main reason major compaction is disabled by default is that the performance of table operations is impacted while a major compaction is in progress. However, you can enable major compaction by configuring the value of the property `hbase.hregion.majorcompaction` (in milliseconds), or you can use a cron job or another external system to schedule compaction at a time that's convenient for you, when the workload is lower.
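+
+For example, a minimal sketch of a cron entry that runs a weekly major compaction by piping the command into HBase shell (the table name and schedule are placeholders):
+
+```
+# Run a major compaction of 'table_name' every Sunday at 02:00
+0 2 * * 0 echo "major_compact 'table_name'" | hbase shell
+```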
+
+## Next steps
+
+If you didn't see your problem or are unable to solve your issue, visit one of the following channels for more support:
+
+* Get answers from Azure experts through [Azure Community Support](https://azure.microsoft.com/support/community/).
+
+* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience. Connecting the Azure community to the right resources: answers, support, and experts.
+
+* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
hdinsight Hive Llap Sizing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/hive-llap-sizing-guide.md
Previously updated : 05/05/2020 Last updated : 05/10/2022 # Azure HDInsight Interactive Query Cluster (Hive LLAP) sizing guide
If setting these values didn't resolve your issue, visit one of the following...
* [Configure the Hive server heap size](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/performance-tuning/content/hive_hiveserver_heap_sizing.html) * [Map Join Memory Sizing for LLAP](https://community.cloudera.com/t5/Community-Articles/Map-Join-Memory-Sizing-For-LLAP/ta-p/247462) * [Tez Execution Engine Properties](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/performance-tuning/content/hive_tez_engine_properties.html)
- * [Hive LLAP deep dive](https://community.cloudera.com/t5/Community-Articles/Hive-LLAP-deep-dive/ta-p/248893)
+ * [Hive LLAP deep dive](https://community.cloudera.com/t5/Community-Articles/Hive-LLAP-deep-dive/ta-p/248893)
hdinsight Apache Spark Creating Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-creating-ml-pipelines.md
description: Use the Apache Spark machine learning library to create data pipeli
Previously updated : 07/22/2019 Last updated : 05/10/2022 # Create an Apache Spark machine learning pipeline
hdinsight Apache Spark Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-known-issues.md
description: Learn about issues related to Apache Spark clusters in Azure HDInsi
Previously updated : 08/15/2019 Last updated : 05/10/2022 # Known issues for Apache Spark cluster on HDInsight
hdinsight Apache Spark Troubleshoot Job Fails Invalidclassexception https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-job-fails-invalidclassexception.md
Title: InvalidClassException error from Apache Spark - Azure HDInsight
description: Apache Spark job fails with InvalidClassException, class version mismatch, in Azure HDInsight Previously updated : 07/29/2019 Last updated : 05/10/2022 # Apache Spark job fails with InvalidClassException, class version mismatch, in Azure HDInsight
Either remove the jar, or recompile the customized jar (AzureLogAppender) and us
## Next steps
hdinsight Apache Spark Use Bi Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-use-bi-tools.md
description: Tutorial - Use Microsoft Power BI to visualize Apache Spark data st
Previously updated : 04/21/2020 Last updated : 05/10/2022 #Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to virtualize Spark data in BI tools.
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | | | Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | | Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
-| Windows 10 Pro | ![Windows 10 Pro + AMD64](./media/support/green-check.png) | | |
-| Windows 10 Enterprise | ![Windows 10 Enterprise + AMD64](./media/support/green-check.png) | | |
-| Windows 10 IoT Enterprise | ![Windows 10 IoT Enterprise + AMD64](./media/support/green-check.png) | | |
-| Windows Server 2019 | ![Windows Server 2019 + AMD64](./media/support/green-check.png) | | |
+| Windows 10/11 Pro | ![Windows 10/11 Pro + AMD64](./media/support/green-check.png) | | |
+| Windows 10/11 Enterprise | ![Windows 10/11 Enterprise + AMD64](./media/support/green-check.png) | | |
+| Windows 10/11 IoT Enterprise | ![Windows 10/11 IoT Enterprise + AMD64](./media/support/green-check.png) | | |
+| Windows Server 2019/2022 | ![Windows Server 2019/2022 + AMD64](./media/support/green-check.png) | | |
:::moniker-end <!-- end 1.1 -->
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | | | Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | | Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
-| Windows 10 Pro | ![Windows 10 Pro + AMD64](./media/support/green-check.png) | | ![Win 10 Pro + ARM64](./media/support/green-check.png)<sup>1</sup> |
-| Windows 10 Enterprise | ![Windows 10 Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
-| Windows 10 IoT Enterprise | ![Windows 10 IoT Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 IoT Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
-| Windows Server 2019 | ![Windows Server 2019 + AMD64](./media/support/green-check.png) | | |
+| Windows 10/11 Pro | ![Windows 10/11 Pro + AMD64](./media/support/green-check.png) | | ![Win 10 Pro + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows 10/11 Enterprise | ![Windows 10/11 Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows 10/11 IoT Enterprise | ![Windows 10/11 IoT Enterprise + AMD64](./media/support/green-check.png) | | ![Win 10 IoT Enterprise + ARM64](./media/support/green-check.png)<sup>1</sup> |
+| Windows Server 2019/2022 | ![Windows Server 2019/2022 + AMD64](./media/support/green-check.png) | | |
<sup>1</sup> Support for this platform using IoT Edge for Linux on Windows is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). :::moniker-end
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
To complete this tutorial, you need the following:
3. In NuGet Package Manager, check **Include prerelease**, and on the **Browse** page search for **Microsoft.Azure.Management.ResourceManager**. Select the package, click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the licenses. 4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license.
+ > [!IMPORTANT]
+ > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
5. In Program.cs, replace the existing **using** statements with the following code:
iot-hub Iot Hub Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template.md
To complete this tutorial, you need the following:
3. In NuGet Package Manager, check **Include prerelease**, and on the **Browse** page search for **Microsoft.Azure.Management.ResourceManager**. Select the package, click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the licenses. 4. In NuGet Package Manager, search for **Microsoft.IdentityModel.Clients.ActiveDirectory**. Click **Install**, in **Review Changes** click **OK**, then click **I Accept** to accept the license.
+ > [!IMPORTANT]
+ > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade; see the [migration guide](/azure/active-directory/develop/msal-migration) for more details.
5. In Program.cs, replace the existing **using** statements with the following code:
load-balancer Tutorial Multi Availability Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-multi-availability-sets-portal.md
Previously updated : 08/12/2021 Last updated : 05/09/2022 # Tutorial: Create a load balancer with more than one availability set in the backend pool using the Azure portal
In this section, you'll create a virtual network for the load balancer and the o
6. Select the **IP addresses** tab, or the **Next: IP Addresses** button at the bottom of the page.
-7. In the **IP addresses** tab, under **Subnet name** select **default**.
+7. In the **IP Addresses** tab, enter this information:
-8. In the **Edit subnet** pane, under **Subnet name** enter **myBackendSubnet**.
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
-9. Select **Save**.
+8. Select **+ Add subnet**.
-10. Select the **Security** tab, or the **Next: Security** button at the bottom of the page.
+9. In **Add subnet**, enter this information:
-11. In the **Security** tab, in **BastionHost** select **Enable**.
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
-12. Enter or select the following information:
+10. Select **Add**.
+
+11. Select the **Security** tab, or the **Next: Security** button at the bottom of the page.
+
+12. In the **Security** tab, in **BastionHost** select **Enable**.
+
+13. Enter or select the following information:
| Setting | Value | | - | -- |
In this section, you'll create a virtual network for the load balancer and the o
| AzureBastionSubnet address space | Enter **10.1.1.0/27**. | | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. |
-13. Select the **Review + create** tab, or the blue **Review + create** button at the bottom of the page.
+14. Select the **Review + create** tab, or the blue **Review + create** button at the bottom of the page.
-14. Select **Create**.
+15. Select **Create**.
## Create NAT gateway
In this section, you'll create a load balancer for the virtual machines.
| SKU | Leave the default **Standard**. | | Tier | Leave the default **Regional**. |
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
+4. Select the **Frontend IP configuration** tab, or select the **Next: Frontend IP configuration** button at the bottom of the page.
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
6. Enter **LoadBalancerFrontend** in **Name**.
In this section, you'll create a load balancer for the virtual machines.
11. Select **Zone-redundant** in **Availability zone**. > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
12. Leave the default of **Microsoft Network** for **Routing preference**.
In this section, you'll create a load balancer for the virtual machines.
14. Select **Add**.
-15. Select **Next: Backend pools** at the bottom of the page.
+15. Select the **Backend pools** tab, or select the **Next: Backend pools** button at the bottom of the page.
16. In the **Backend pools** tab, select **+ Add a backend pool**.
In this section, you'll create a load balancer for the virtual machines.
21. Select **Add**.
-22. Select the **Next: Inbound rules** button at the bottom of the page.
+22. Select the **Inbound rules** tab, or select the **Next: Inbound rules** button at the bottom of the page.
23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
In this section, you'll create a load balancer for the virtual machines.
27. Select **Create**. > [!NOTE]
- > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md) ## Create virtual machines
In this section, you'll create two availability groups with two virtual machines
7. Select **Create**.
-8. Repeat steps 1 through seven to create the second virtual machine of the set. Replace the settings for the VM with the following information:
+8. Repeat steps 1 through 7 to create the second virtual machine of the set. Replace the settings for the VM with the following information:
| Setting | Value | | - | -- |
In this section, you'll create two availability groups with two virtual machines
7. Select **Create**.
-8. Repeat steps 1 through seven to create the second virtual machine of the set. Replace the settings for the VM with the following information:
+8. Repeat steps 1 through 7 to create the second virtual machine of the set. Replace the settings for the VM with the following information:
| Setting | Value | | - | -- |
In this section, you'll use the Azure Bastion host you created previously to con
4. In the **Overview** page of myVM1, select **Connect** > **Bastion**.
-5. Select **Use Bastion**.
-
-6. Enter the **Username** and **Password** you created when you created the virtual machine.
+5. Enter the **Username** and **Password** you created when you created the virtual machine.
-7. Select **Connect**.
+6. Select **Connect**.
7. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell**.
In this section, you'll use the Azure Bastion host you created previously to con
# Add a new htm file that displays server name Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername) ```
-8. Close the Bastion session with **myVM1**.
+9. Close the Bastion session with **myVM1**.
-9. Repeat steps 1 through eight for **myVM2**, **myVM3**, and **myVM4**.
+10. Repeat steps 1 through 8 for **myVM2**, **myVM3**, and **myVM4**.
## Test the load balancer
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
+
+ Title: Configure customer-managed keys for encryption
+
+description: Learn how to configure customer-managed keys for your Azure Load Testing resource with Azure Key Vault
++++ Last updated : 05/10/2022+++
+# Configure customer-managed keys for your Azure Load Testing Preview resource with Azure Key Vault
+
+Azure Load Testing Preview automatically encrypts all data stored in your load testing resource with keys that Microsoft provides (service-managed keys). Optionally, you can add a second layer of security by also providing your own (customer-managed) keys. Customer-managed keys offer greater flexibility for controlling access and using key-rotation policies.
+
+The keys you provide are stored securely using [Azure Key Vault](/azure/key-vault/general/overview). You can create a separate key for each Azure Load Testing resource you enable with customer-managed keys.
+
+Azure Load Testing uses the customer-managed key to encrypt the following data in the load testing resource:
+
+- Test script and configuration files
+- Secrets
+- Environment variables
+
+> [!IMPORTANT]
+> Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+- An existing user-assigned managed identity. For more information about creating a user-assigned managed identity, see [Manage user-assigned managed identities](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity), or see the sketch below.
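+
+  As a minimal sketch, assuming you use the Azure CLI, you can create a user-assigned managed identity like this (the values in brackets are placeholders):
+
+  ```azurecli
+  az identity create \
+    --name <identity-name> \
+    --resource-group <resource_group>
+  ```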
+
+## Limitations
+
+- Customer-managed keys are only available for new Azure Load Testing resources. You should configure the key during resource creation.
+
+- Azure Load Testing cannot automatically rotate the customer-managed key to use the latest version of the encryption key. You should update the key URI in the resource after the key is rotated in the Azure Key Vault.
+
+- Once customer-managed key encryption is enabled on a resource, it cannot be disabled.
+
+## Configure your Azure Key Vault
+You can use a new or existing key vault to store customer-managed keys. The Azure Load Testing resource and key vault may be in different regions or subscriptions in the same tenant.
+
+You have to set the **Soft Delete** and **Purge Protection** properties on your Azure Key Vault instance to use customer-managed keys with Azure Load Testing. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection at any time.
+
+# [Azure portal](#tab/portal)
+
+To learn how to create a key vault with the Azure portal, see [Create a key vault using the Azure portal](/azure/key-vault/general/quick-create-portal). When you create the key vault, select **Enable purge protection**, as shown in the following image.
++
+To enable purge protection on an existing key vault, follow these steps:
+
+1. Navigate to your key vault in the Azure portal.
+1. Under **Settings**, choose **Properties**.
+1. In the **Purge protection** section, choose **Enable purge protection**.
+
+# [PowerShell](#tab/powershell)
+
+To create a new key vault with PowerShell, install version 2.0.0 or later of the [Az.KeyVault](https://www.powershellgallery.com/packages/Az.KeyVault/2.0.0) PowerShell module. Then call [New-AzKeyVault](/powershell/module/az.keyvault/new-azkeyvault) to create a new key vault. With version 2.0.0 and later of the Az.KeyVault module, soft delete is enabled by default when you create a new key vault.
+
+The following example creates a new key vault with both soft delete and purge protection enabled. Remember to replace the placeholder values in brackets with your own values.
+
+```azurepowershell
+$keyVault = New-AzKeyVault -Name <key-vault> `
+ -ResourceGroupName <resource_group> `
+ -Location <location> `
+ -EnablePurgeProtection
+```
+
+To learn how to enable purge protection on an existing key vault with PowerShell, see [Azure Key Vault recovery overview](/azure/key-vault/general/key-vault-recovery?tabs=azure-powershell).
+
+# [Azure CLI](#tab/azure-cli)
+
+To create a new key vault using Azure CLI, call [az keyvault create](/cli/azure/keyvault#az-keyvault-create). Remember to replace the placeholder values in brackets with your own values:
+
+```azurecli
+az keyvault create \
+ --name <key-vault> \
+ --resource-group <resource_group> \
+ --location <region> \
+ --enable-purge-protection
+```
+
+To learn how to enable purge protection on an existing key vault with Azure CLI, see [Azure Key Vault recovery overview](/azure/key-vault/general/key-vault-recovery?tabs=azure-cli).
+++
+## Add a key
+
+Next, add a key to the key vault. Azure Load Testing encryption supports RSA keys. For more information about supported key types, see [About keys](/azure/key-vault/keys/about-keys).
+
+# [Azure portal](#tab/portal)
+
+To learn how to add a key with the Azure portal, see [Set and retrieve a key from Azure Key Vault using the Azure portal](/azure/key-vault/keys/quick-create-portal).
+
+# [PowerShell](#tab/powershell)
+
+To add a key with PowerShell, call [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey). Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples.
+
+```azurepowershell
+$key = Add-AzKeyVaultKey -VaultName $keyVault.VaultName `
+ -Name <key> `
+ -Destination 'Software'
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+To add a key with Azure CLI, call [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create). Remember to replace the placeholder values in brackets with your own values.
+
+```azurecli
+az keyvault key create \
+ --name <key> \
+ --vault-name <key-vault>
+```
+++
+## Add an access policy to your Azure Key Vault
+
+The user-assigned managed identity that you use to configure customer-managed keys on the Azure Load Testing resource must have appropriate permissions to access the key vault.
+
+1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys. Select **Access Policies** from the left menu:
+
+ :::image type="content" source="media/how-to-configure-customer-managed-keys/access-policies-azure-key-vault.png" alt-text="Screenshot that shows access policies option in Azure Key Vault.":::
+
+1. Select **+ Add Access Policy**.
+
+1. Under the **Key permissions** drop-down menu, select **Get**, **Unwrap Key**, and **Wrap Key** permissions:
+
+ :::image type="content" source="media/how-to-configure-customer-managed-keys/azure-key-vault-permissions.png" alt-text="Screenshot that shows Azure Key Vault permissions.":::
+
+1. Under **Select principal**, select **None selected**.
+
+1. Search for the user-assigned managed identity you created and select it.
+
+1. Choose **Select** at the bottom.
+
+1. Select **Add** to add the new access policy.
+
+1. Select **Save** on the Key Vault instance to save all changes.
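+
+Alternatively, as a minimal sketch, you can grant the same key permissions with the Azure CLI (the key vault name and the managed identity's principal ID are placeholders):
+
+```azurecli
+az keyvault set-policy \
+  --name <key-vault> \
+  --object-id <managed-identity-principal-id> \
+  --key-permissions get unwrapKey wrapKey
+```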
+
+## Configure customer-managed keys for a new Azure Load Testing resource
+
+To configure customer-managed keys for a new Azure Load Testing resource, follow these steps:
+
+# [Azure portal](#tab/portal)
+
+1. In the Azure portal, navigate to the **Azure Load Testing** page, and select the **Create** button to create a new resource.
+
+1. Follow the steps outlined in [create an Azure Load Testing resource](/azure/load-testing/quickstart-create-and-run-load-test#create_resource) to fill out the fields on the **Basics** tab.
+
+1. Go to the **Encryption** tab. In the **Encryption type** field, select **Customer-managed keys (CMK)**.
+
+1. In the **Key URI** field, paste the URI/key identifier of the Azure Key Vault key including the key version.
+
+1. For the **User-assigned identity** field, select an existing user-assigned managed identity.
+
+1. Select **Review + create** to validate and create the new resource.
++
+# [PowerShell](#tab/powershell)
+
+You can deploy an ARM template using PowerShell to automate the creation of your Azure resources. You can create any resource of type `Microsoft.LoadTestService/loadtests` with customer-managed key encryption enabled by adding the following properties:
+
+```json
+"encryption": {
+ "keyUrl": "https://contosovault.vault.azure.net/keys/contosokek/abcdef01234567890abcdef012345678",
+ "identity": {
+ "type": "UserAssigned",
+ "resourceId": "User assigned managed identity resource id"
+ }
+}
+```
+
+For example, an Azure Load Testing resource might look like the following:
+
+```json
+{
+ "type": "Microsoft.LoadTestService/loadtests",
+ "apiVersion": "2022-04-15-preview",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "identity": {
+ "type": "userassigned",
+ "userAssignedIdentities": {
+ "User assigned managed identity resource id": {}
+ }
+ },
+ "properties": {
+ "encryption": {
+ "identity": {
+ "type": "UserAssigned",
+ "resourceId": "User assigned managed identity resource id"
+ },
+ "keyUrl": "https://contosovault.vault.azure.net/keys/contosokek/abcdef01234567890abcdef012345678"
+ }
+ }
+}
+```
+
+Deploy the above template to a resource group, using [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment):
+
+```azurepowershell
+New-AzResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile <path-to-template>
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+You can deploy an ARM template using the Azure CLI to automate the creation of your Azure resources. You can create any resource of type `Microsoft.LoadTestService/loadtests` with customer-managed key encryption enabled by adding the following properties:
+
+```json
+"encryption": {
+ "keyUrl": "https://contosovault.vault.azure.net/keys/contosokek/abcdef01234567890abcdef012345678",
+ "identity": {
+ "type": "UserAssigned",
+ "resourceId": "User assigned managed identity resource id"
+ }
+}
+```
+
+For example, an Azure Load Testing resource might look like the following:
+
+```json
+{
+ "type": "Microsoft.LoadTestService/loadtests",
+ "apiVersion": "2022-04-15-preview",
+ "name": "[parameters('name')]",
+ "location": "[parameters('location')]",
+ "tags": "[parameters('tags')]",
+ "identity": {
+ "type": "userassigned",
+ "userAssignedIdentities": {
+ "User assigned managed identity resource id": {}
+ }
+ },
+ "properties": {
+ "encryption": {
+ "identity": {
+ "type": "UserAssigned",
+ "resourceId": "User assigned managed identity resource id"
+ },
+ "keyUrl": "https://contosovault.vault.azure.net/keys/contosokek/abcdef01234567890abcdef012345678"
+ }
+ }
+}
+```
+
+Deploy the above template to a resource group, using [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create):
+
+```azurecli-interactive
+az deployment group create --resource-group <resource-group-name> --template-file <path-to-template>
+```
+
+---
+
+## Change the managed identity
+
+You can change the managed identity for customer-managed keys for an existing Azure Load Testing resource at any time.
+
+1. Navigate to your Azure Load Testing resource.
+
+1. On the **Settings** page, select **Encryption**.
+
+ The **Encryption type** shows the encryption type you selected at resource creation time.
+
+1. If the encryption type is **Customer-managed keys**, select the type of identity to use to authenticate to the key vault. The options include **System-assigned** (the default) or **User-assigned**.
+
+ To learn more about each type of managed identity, see [Managed identity types](/azure/active-directory/managed-identities-azure-resources/overview#managed-identity-types).
+
+   - If you select **System-assigned**, the system-assigned managed identity must be enabled on the resource and granted access to the Azure Key Vault before you change the identity for customer-managed keys.
+ - If you select **User-assigned**, you must select an existing user-assigned identity that has permissions to access the key vault. To learn how to create a user-assigned identity, see [Use managed identities for Azure Load Testing Preview](how-to-use-a-managed-identity.md).
+
+1. Save your changes.
++
+> [!NOTE]
+> The selected managed identity should have access granted on the Azure Key Vault.
+
+## Change the key
+
+You can change the key that you are using for Azure Load Testing encryption at any time. To change the key with the Azure portal, follow these steps:
+
+1. Navigate to your Azure Load Testing resource.
+
+1. On the **Settings** page, select **Encryption**. The **Encryption type** shows the encryption type selected for the resource at creation time.
+
+1. If the selected encryption type is *Customer-managed keys*, you can edit the **Key URI** field with the new key URI.
+
+1. Save your changes.
+
+## Key rotation
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. To rotate a key, in Azure Key Vault, update the key version or create a new key. You can then update the Azure Load Testing resource to [encrypt data using the new key URI](#change-the-key).
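+
+As an illustration, assuming you use the Azure CLI, running [az keyvault key create](/cli/azure/keyvault/key#az-keyvault-key-create) against an existing key name creates a new key version, which you can then reference by its new key URI:
+
+```azurecli
+az keyvault key create \
+  --name <key> \
+  --vault-name <key-vault>
+```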
+
+## Frequently asked questions
+
+### Is there an additional charge to enable customer-managed keys?
+
+No, there's no charge to enable this feature.
+
+### Are customer-managed keys supported for existing Azure Load Testing resources?
+
+This feature is currently only available for new Azure Load Testing resources.
+
+### How can I tell if customer-managed keys are enabled on my Azure Load Testing account?
+
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Load Testing resource.
+1. Go to the **Encryption** item in the left navigation bar.
+1. You can verify the **Encryption type** on your resource.
+
+### How do I revoke an encryption key?
+
+You can revoke a key by disabling the latest version of the key in Azure Key Vault. Alternatively, to revoke all keys from an Azure Key Vault instance, you can delete the access policy granted to the managed identity of the Azure Load Testing resource.
+
+When you revoke the encryption key, you may be able to run tests for about 10 minutes, after which the only available operation is resource deletion. We recommend rotating the key instead of revoking it, so that you can manage resource security and retain your data.
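+
+For example, a minimal sketch of disabling the latest key version with the Azure CLI (the values in brackets are placeholders):
+
+```azurecli
+az keyvault key set-attributes \
+  --name <key> \
+  --vault-name <key-vault> \
+  --enabled false
+```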
+
+## Next steps
+
+- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).
+- Learn how to [Parameterize a load test](./how-to-parameterize-load-tests.md).
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
In this section, you'll run the load test that you just created. If you selected
:::image type="content" source="./media/quickstart-create-and-run-loadtest/tests.png" alt-text="Screenshot that shows the list of load tests." :::
-1. On the page that shows test details, select **Run** or **Run test**.
-
- :::image type="content" source="./media/quickstart-create-and-run-loadtest/run-test.png" alt-text="Screenshot that shows the button for running a load test." :::
-
-1. On the **Run** confirmation page, optionally modify the test details. Then, select **Run** to start the load test.
+1. On the test details page, select **Run** or **Run test**. Then, select **Run** on the **Run test** confirmation pane to start the load test.
:::image type="content" source="./media/quickstart-create-and-run-loadtest/run-test-confirm.png" alt-text="Screenshot that shows the run confirmation page." :::
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
In this section, you'll use the Azure portal to manually start the load test tha
>[!TIP] > You can use the search box and the **Time range** filter to limit the number of tests.
-1. On the page that shows test details, select **Run** or **Run test**.
+1. On the test details page, select **Run** or **Run test**. Then, select **Run** on the **Run test** confirmation pane to start the load test.
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/test-runs-run.png" alt-text="Screenshot that shows selections for running a test." :::
-1. Select **Run** on the run summary page to start the load test. You'll then see the list of test runs.
-
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/test-run-list.png" alt-text="Screenshot that shows the run summary page." :::
- Azure Load Testing begins to monitor and display the application's server metrics on the dashboard. You can see the streaming client-side metrics while the test is running. By default, the results refresh automatically every five seconds.
For Azure Cosmos DB, increase the database RU scale setting:
Now that you've increased the database throughput, rerun the load test and verify that the performance results have improved:
-1. Return to the page that shows test run details and select **Rerun**. Then select **Run** on the run summary page.
+1. On the test run dashboard, select **Rerun**, and then select **Rerun** on the **Rerun test** pane.
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/rerun-test.png" alt-text="Screenshot that shows selections for running the load test.":::
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
On Consumption logic apps only, you can set up permissions so that only specific
For example, suppose you have to work with a logic app that you didn't create and authenticate connections used by that logic app's workflow. Your Azure subscription requires Contributor permissions for the resource group that contains that logic app resource. If you create a logic app resource, you automatically have Contributor access.
-To prevent others from changing or deleting your logic app, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md). This capability prevents others from changing or deleting production resources. For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/apis-list.md#connection-configuration) and [Connection security and encryption](../connectors/apis-list.md#connection-security-encyrption).
+To prevent others from changing or deleting your logic app, you can use [Azure Resource Lock](../azure-resource-manager/management/lock-resources.md). This capability prevents others from changing or deleting production resources. For more information about connection security, review [Connection configuration in Azure Logic Apps](../connectors/apis-list.md#connection-configuration) and [Connection security and encryption](../connectors/apis-list.md#connection-security-encryption).
<a name="secure-run-history"></a>
logic-apps Set Up Zone Redundancy Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-zone-redundancy-availability-zones.md
During preview, the following considerations apply:
* The following list includes the Azure regions where you can currently enable availability zones with the list expanding as available:
+ * Australia East
* Brazil South * Canada Central
+ * Central US
+ * East US
+ * East US 2
* France Central * Japan East * South Central US
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
For this experiment, deployment to a web service means that the bike share compa
Once the run is complete, navigate back to parent run page by selecting **Run 1** at the top of your screen.
-In the **Best model summary** section, **StackEnsemble** is considered the best model in the context of this experiment, based on the **Normalized root mean squared error** metric.
+In the **Best model summary** section, the best model in the context of this experiment is selected based on the **Normalized root mean squared error** metric.
We deploy this model, but be advised, deployment takes about 20 minutes to complete. The deployment process entails several steps including registering the model, generating resources, and configuring them for the web service.
-1. Select **StackEnsemble** to open the model-specific page.
+1. Select the best model to open the model-specific page.
1. Select the **Deploy** button located in the top-left area of the screen.
marketplace Gtm Marketing Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-marketing-best-practices.md
Previously updated : 06/23/2021 Last updated : 05/10/2022 # Marketing best practices
migrate Troubleshoot Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-dependencies.md
If your Azure Migrate project has private endpoint connectivity, the request to
## Common agentless dependency analysis errors
-Azure Migrate supports agentless dependency analysis by using Azure Migrate: Discovery and assessment.[Learn more](how-to-create-group-machine-dependencies-agentless.md) about how to perform agentless dependency analysis.
+Azure Migrate supports agentless dependency analysis by using Azure Migrate: Discovery and assessment. [Learn more](how-to-create-group-machine-dependencies-agentless.md) about how to perform agentless dependency analysis.
For VMware VMs, agentless dependency analysis is performed by connecting to the servers via the vCenter Server using the VMware APIs. For Hyper-V VMs and physical servers, agentless dependency analysis is performed by directly connecting to Windows servers using PowerShell remoting on port 5985 (HTTP) and to Linux servers using SSH connectivity on port 22 (TCP).
mysql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-limits.md
The following sections describe capacity, storage engine support, privilege supp
Azure Database for MySQL supports tuning the values of server parameters. The min and max value of some parameters (ex. `max_connections`, `join_buffer_size`, `query_cache_size`) is determined by the pricing tier and vCores of the server. Refer to [server parameters](./concepts-server-parameters.md) for more information about these limits.
-Upon initial deployment, an Azure for MySQL server includes systems tables for time zone information, but these tables are not populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](howto-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
+Upon initial deployment, an Azure for MySQL server includes systems tables for time zone information, but these tables aren't populated. The time zone tables can be populated by calling the `mysql.az_load_timezone` stored procedure from a tool like the MySQL command line or MySQL Workbench. Refer to the [Azure portal](howto-server-parameters.md#working-with-the-time-zone-parameter) or [Azure CLI](howto-configure-server-parameters-using-cli.md#working-with-the-time-zone-parameter) articles for how to call the stored procedure and set the global or session-level time zones.
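+
+For illustration, a hedged example of populating the time zone tables and then setting the session-level time zone (the zone name is just an example):
+
+```sql
+CALL mysql.az_load_timezone();
+SET time_zone = 'US/Pacific';
+```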
-Password plugins such as "validate_password" and "caching_sha2_password" are not supported by the service.
+Password plugins such as "validate_password" and "caching_sha2_password" aren't supported by the service.
## Storage engines
MySQL supports many storage engines. On Azure Database for MySQL, the following
## Privileges & data manipulation support
-Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service does not expose multiple roles.
+Many server parameters and settings can inadvertently degrade server performance or negate ACID properties of the MySQL server. To maintain the service integrity and SLA at a product level, this service doesn't expose multiple roles.
-The MySQL service does not allow direct access to the underlying file system. Some data manipulation commands are not supported.
+The MySQL service doesn't allow direct access to the underlying file system. Some data manipulation commands aren't supported.
### Unsupported
The following are unsupported:
- DBA role: Restricted. Alternatively, you can use the administrator user (created during new server creation), allows you to perform most of DDL and DML statements. - SUPER privilege: Similarly, [SUPER privilege](https://dev.mysql.com/doc/refman/5.7/en/privileges-provided.html#priv_super) is restricted. - DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` command when performing a [mysqlpump](https://dev.mysql.com/doc/refman/5.7/en/mysqlpump.html).-- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You cannot make changes to the `mysql` system database.
+- System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You can't make changes to the `mysql` system database.
- `SELECT ... INTO OUTFILE`: Not supported in the service. - `LOAD_FILE(file_name)`: Not supported in the service.
+- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting the BACKUP_ADMIN privilege isn't supported for taking backups using any [utility tools](./how-to-decide-on-right-migration-tools.md).
### Supported - `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you are using MySQL client version >= 8.0 you need to include the `--local-infile=1` parameter in your connection string.
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
The following are unsupported:
- DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, remove the `CREATE DEFINER` commands manually or by using the `--skip-definer` command when performing a mysqldump. - System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionality. You cannot make changes to the `mysql` system database. - `SELECT ... INTO OUTFILE`: Not supported in the service.
+- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting the BACKUP_ADMIN privilege is not supported for taking backups using any [utility tools](../how-to-decide-on-right-migration-tools.md).
### Supported - `LOAD DATA INFILE` is supported, but the `[LOCAL]` parameter must be specified and directed to a UNC path (Azure storage mounted through SMB). Additionally, if you are using MySQL client version >= 8.0 you need to include the `--local-infile=1` parameter in your connection string.
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-mysql-github-actions.md
Previously updated : 02/14/2022 Last updated : 05/09/2022
The file has two sections:
|Section |Tasks | |||
-|**Authentication** | 1. Define a service principal. <br /> 2. Create a GitHub secret. |
+|**Authentication** | 1. Generate deployment credentials. |
|**Deploy** | 1. Deploy the database. | ## Generate deployment credentials
+# [Service principal](#tab/userlevel)
You can create a [service principal](../active-directory/develop/app-objects-and-service-principals.md) with the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac&preserve-view=true) command in the [Azure CLI](/cli/azure/). Run this command with [Azure Cloud Shell](https://shell.azure.com/) in the Azure portal or by selecting the **Try it** button.
The output is a JSON object with the role assignment credentials that provide ac
> [!IMPORTANT] > It's always a good practice to grant minimum access. The scope in the previous example is limited to the specific server and not the entire resource group.
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
+++ ## Copy the MySQL connection string In the Azure portal, go to your Azure Database for MySQL server and open **Settings** > **Connection strings**. Copy the **ADO.NET** connection string. Replace the placeholder values for `your_database` and `your_password`. The connection string will look similar to the following.
In the Azure portal, go to your Azure Database for MySQL server and open **Setti
You'll use the connection string as a GitHub secret. ## Configure GitHub secrets
+# [Service principal](#tab/userlevel)
1. In [GitHub](https://github.com/), browse your repository.
You'll use the connection string as a GitHub secret.
5. Paste the connection string value into the secret's value field. Give the secret the name `AZURE_MYSQL_CONNECTION_STRING`.
+# [OpenID Connect](#tab/openid)
+
+You need to provide your application's **Client ID**, **Tenant ID**, and **Subscription ID** to the login action. These values can either be provided directly in the workflow or can be stored in GitHub secrets and referenced in your workflow. Saving the values as GitHub secrets is the more secure option.
+
+1. Open your GitHub repository and go to **Settings**.
+
+1. Select **Settings > Secrets > New secret**.
+
+1. Create secrets for `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID`. Use these values from your Active Directory application for your GitHub secrets:
+
+ |GitHub Secret | Active Directory Application |
+ |||
+ |AZURE_CLIENT_ID | Application (client) ID |
+ |AZURE_TENANT_ID | Directory (tenant) ID |
+ |AZURE_SUBSCRIPTION_ID | Subscription ID |
+
+1. Save each secret by selecting **Add secret**.
+++ ## Add your workflow 1. Go to **Actions** for your GitHub repository.
You'll use the connection string as a GitHub secret.
on: push:
- branches: [ master ]
+ branches: [ main ]
pull_request:
- branches: [ master ]
+ branches: [ main ]
``` 4. Rename your workflow `MySQL for GitHub Actions` and add the checkout and login actions. These actions will check out your site code and authenticate with Azure using the `AZURE_CREDENTIALS` GitHub secret you created earlier.
+ # [Service principal](#tab/userlevel)
+
```yaml name: MySQL for GitHub Actions-
+
on: push:
- branches: [ master ]
+ branches: [ main ]
pull_request:
- branches: [ master ]
-
+ branches: [ main ]
+
jobs: build: runs-on: windows-latest
You'll use the connection string as a GitHub secret.
- uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}
+ ```
+
+ # [OpenID Connect](#tab/openid)
+
+ ```yaml
+ name: MySQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
```
+
+ ___
5. Use the Azure MySQL Deploy action to connect to your MySQL instance. Replace `MYSQL_SERVER_NAME` with the name of your server. You should have a MySQL data file named `data.sql` at the root level of your repository.
You'll use the connection string as a GitHub secret.
6. Complete your workflow by adding an action to sign out of Azure. Here's the completed workflow. The file will appear in the `.github/workflows` folder of your repository.
+ # [Service principal](#tab/userlevel)
+
```yaml name: MySQL for GitHub Actions-
+
on: push:
- branches: [ master ]
+ branches: [ main ]
pull_request:
- branches: [ master ]
+ branches: [ main ]
jobs: build: runs-on: windows-latest
You'll use the connection string as a GitHub secret.
- uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }}-
+
- uses: azure/mysql@v1 with: server-name: MYSQL_SERVER_NAME connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }} sql-file: './data.sql'-
+
+ # Azure logout
+ - name: logout
+ run: |
+ az logout
+ ```
+ # [OpenID Connect](#tab/openid)
+
+ ```yaml
+ name: MySQL for GitHub Actions
+
+ on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+ jobs:
+ build:
+ runs-on: windows-latest
+ steps:
+ - uses: actions/checkout@v1
+ - uses: azure/login@v1
+ with:
+ client-id: ${{ secrets.AZURE_CLIENT_ID }}
+ tenant-id: ${{ secrets.AZURE_TENANT_ID }}
+ subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
+ - uses: azure/mysql@v1
+ with:
+ server-name: MYSQL_SERVER_NAME
+ connection-string: ${{ secrets.AZURE_MYSQL_CONNECTION_STRING }}
+ sql-file: './data.sql'
+
# Azure logout - name: logout run: | az logout ```
+ ___
## Review your deployment
network-watcher View Relative Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-relative-latencies.md
> [!WARNING] > This feature is currently under deprecation.
+> [!NOTE]
+> If you are interested in measuring the network latency from your end users to your selected set of network destinations you may want to try the [Azure Internet Analyzer tool (preview)](../internet-analyzer/internet-analyzer-overview.md).
+ In this tutorial, learn how to use the Azure [Network Watcher](network-watcher-monitoring-overview.md) service to help you decide what Azure region to deploy your application or service in, based on your user demographic. Additionally, you can use it to help evaluate service providers' connections to Azure.
postgresql Overview Postgres Choose Server Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/overview-postgres-choose-server-options.md
With Azure, your PostgreSQL Server workloads can run in a hosted virtual machine
When making your decision, consider the following three options in PaaS or alternatively running on Azure VMs (IaaS):
- [Azure Database for PostgreSQL Single Server](./overview-single-server.md)
- [Azure Database for PostgreSQL Flexible Server](./flexible-server/overview.md)
-- [Azure Database for PostgreSQL Hyperscale (Citus)](hyperscale/index.yml)
+- [Azure Database for PostgreSQL Hyperscale (Citus)](./hyperscale/overview.md)
The **PostgreSQL on Azure VMs** option falls into the industry category of IaaS. With this service, you can run PostgreSQL Server inside a fully managed virtual machine on the Azure cloud platform. All recent versions and editions of PostgreSQL can be installed on an IaaS virtual machine. In the most significant difference from Azure Database for PostgreSQL, PostgreSQL on Azure VMs offers control over the database engine. However, this control comes at the cost of responsibility to manage the VMs and many database administration (DBA) tasks. These tasks include maintaining and patching database servers, database recovery, and high-availability design.
remote-rendering Spatial Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/spatial-queries.md
void CastRay(ApiHandle<RenderingSession> session)
// trace a line from the origin into the +z direction, over 10 units of distance.
RayCast rayCast;
rayCast.StartPos = {0, 0, 0};
- rayCast.EndPos = {0, 0, 1};
- rayCast.MaxHits = 10;
+ rayCast.EndPos = {0, 0, 10};
// only return the closest hit
rayCast.HitCollection = HitCollectionPolicy::ClosestHit;
search Cognitive Search Skill Document Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-document-extraction.md
Title: Document Extraction cognitive skill
description: Extracts content from a file within the enrichment pipeline.
Last updated 12/12/2021
# Document Extraction cognitive skill
search Cognitive Search Skill Image Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-image-analysis.md
Title: Image Analysis cognitive skill
description: Extract semantic text through image analysis using the Image Analysis cognitive skill in an AI enrichment pipeline in Azure Cognitive Search.
Previously updated : 04/27/2022 Last updated : 05/06/2022
# Image Analysis cognitive skill
Microsoft.Skills.Vision.ImageAnalysisSkill
Parameters are case-sensitive.
-| Parameter name | Description |
+| Parameter name | Description |
|--|-|
-| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter is not specified, the default value is "en". <br/><br/>Supported languages are: <br/>*en* - English (default) <br/> *es* - Spanish <br/> *ja* - Japanese <br/> *pt* - Portuguese <br/> *zh* - Simplified Chinese|
-| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic in nature (depicts nudity or a sex act), or is gory (depicts extreme violence or blood). Sexually suggestive content (also known as racy content) is also detected.</li><li>*brands* - detects various brands within an image, including the approximate location. The *brands* visual feature is only available in English.</li><li> *categories* - categorizes image content according to a taxonomy defined in the Cognitive Services [Computer Vision documentation](../cognitive-services/computer-vision/category-taxonomy.md). </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age.</li><li> *objects* - detects various objects within an image, including the approximate location. The *objects* visual feature is only available in English.</li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Note that the *color* and *imageType* visual features have been deprecated, but this functionality could still be accessed via a [custom skill](./cognitive-search-custom-skill-interface.md).|
+| `defaultLanguageCode` | A string indicating the language to return. The service returns recognition results in a specified language. If this parameter isn't specified, the default value is "en". <br/><br/>Supported languages are: <br/>`en` - English (default) <br/>`es` - Spanish <br/>`ja` - Japanese <br/>`pt` - Portuguese <br/>`zh` - Simplified Chinese|
+| `visualFeatures` | An array of strings indicating the visual feature types to return. Valid visual feature types include: <ul><li>*adult* - detects if the image is pornographic (depicts nudity or a sex act), gory (depicts extreme violence or blood) or suggestive (also known as racy content). </li><li>*brands* - detects various brands within an image, including the approximate location. The *brands* visual feature is only available in English.</li><li> *categories* - categorizes image content according to a [taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) defined by Cognitive Services. </li><li>*description* - describes the image content with a complete sentence in supported languages.</li><li>*faces* - detects if faces are present. If present, generates coordinates, gender and age. </li><li>*objects* - detects various objects within an image, including the approximate location. The *objects* visual feature is only available in English.</li><li> *tags* - tags the image with a detailed list of words related to the image content.</li></ul> Names of visual features are case-sensitive. Both *color* and *imageType* visual features have been deprecated, but you can access this functionality through a [custom skill](./cognitive-search-custom-skill-interface.md).|
| `details` | An array of strings indicating which domain-specific details to return. Valid visual feature types include: <ul><li>*celebrities* - identifies celebrities if detected in the image.</li><li>*landmarks* - identifies landmarks if detected in the image. </li></ul> |

## Skill inputs

| Input name | Description |
|||
-| `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. See the [sample](#sample-output) for more information.|
+| `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. |
-<!-- ## Skill outputs
+## Skill outputs
-| Output name | Description |
+| Output name | Description |
||-|
-| `categories` | Complex type that ... |
-| `tags` | Complex type that ... |
-| `description` | Complex type that ... |
-| `faces` | Complex type that ... |
-| `brands` | Complex type that ... | -->
+| `adult` | Output is a single [adult](../cognitive-services/computer-vision/concept-detecting-adult-content.md) object of a complex type, consisting of boolean fields (`isAdultContent`, `isGoryContent`, `isRacyContent`) and double type scores (`adultScore`, `goreScore`, `racyScore`). |
+| `brands` | Output is an array of [brand](../cognitive-services/computer-vision/concept-brand-detection.md) objects, where the object is a complex type consisting of `name` (string) and a `confidence` score (double). It also returns a `rectangle` with four bounding box coordinates (`x`, `y`, `w`, `h`, in pixels) indicating placement inside the image. For the rectangle, `x` and `y` are the top left. Bottom left is `x`, `y+h`. Top right is `x+w`, `y`. Bottom right is `x+w`, `y+h`.|
+| `categories` | Output is an array of [category](../cognitive-services/computer-vision/concept-categorizing-images.md) objects, where each category object is a complex type consisting of a `name` (string), `score` (double), and optional `detail` that contains celebrity or landmark details. See the [category taxonomy](../cognitive-services/Computer-vision/Category-Taxonomy.md) for the full list of category names. A detail is a nested complex type. A celebrity detail consists of a name, confidence score, and face bounding box. A landmark detail consists of a name and confidence score.|
+| `description` | Output is a single [description](../cognitive-services/computer-vision/concept-describing-images.md) object of a complex type, consisting of lists of `tags` and `caption` (an array consisting of `Text` (string) and `confidence` (double)). |
+| `faces` | Complex type consisting of `age`, `gender`, and `faceBoundingBox` having four bounding box coordinates (in pixels) indicating placement inside the image. Coordinates are `top`, `left`, `width`, `height`.|
+| `objects` | Output is an array of [visual feature objects](../cognitive-services/computer-vision/concept-object-detection.md). Each object is a complex type, consisting of `object` (string), `confidence` (double), `rectangle` (with four bounding box coordinates indicating placement inside the image), and a `parent` that contains an object name and confidence. |
+| `tags` | Output is an array of [imageTag](../cognitive-services/computer-vision/concept-detecting-image-types.md) objects, where a tag object is a complex type consisting of `name` (string), `hint` (string), and `confidence` (double). The addition of a hint is rare. It's only generated if a tag is ambiguous. For example, an image tagged as "curling" might have a hint of "sports" to better indicate its content. |
## Sample skill definition

```json
+{
+ "description": "Extract image analysis.",
+ "@odata.type": "#Microsoft.Skills.Vision.ImageAnalysisSkill",
+ "context": "/document/normalized_images/*",
+ "defaultLanguageCode": "en",
+ "visualFeatures": [
+ "adult",
+ "brands",
+ "categories",
+ "description",
+ "faces",
+ "objects",
+ "tags"
+ ],
+ "inputs": [
{
- "description": "Extract image analysis.",
- "@odata.type": "#Microsoft.Skills.Vision.ImageAnalysisSkill",
- "context": "/document/normalized_images/*",
- "defaultLanguageCode": "en",
- "visualFeatures": [
- "tags",
- "categories",
- "description",
- "faces",
- "brands"
- ],
- "inputs": [
- {
- "name": "image",
- "source": "/document/normalized_images/*"
- }
- ],
- "outputs": [
- {
- "name": "categories"
- },
- {
- "name": "tags"
- },
- {
- "name": "description"
- },
- {
- "name": "faces"
- },
- {
- "name": "brands"
- }
- ]
+ "name": "image",
+ "source": "/document/normalized_images/*"
+ }
+ ],
+ "outputs": [
+ {
+ "name": "adult"
+ },
+ {
+ "name": "brands"
+ },
+ {
+ "name": "categories"
+ },
+ {
+ "name": "description"
+ },
+ {
+ "name": "faces"
+ },
+ {
+ "name": "objects"
+ },
+ {
+ "name": "tags"
}
+ ]
+}
```
-### Sample index (for only the categories, description, faces and tags fields)
+### Sample index
+
+For single-object outputs (such as `adult` and `description`), which are produced once per image, you can structure the index field as a `Collection(Edm.ComplexType)` to capture the output for all images in a document. For more information about mapping outputs to index fields, see [Flattening information from complex types](cognitive-search-output-field-mapping.md#flattening-information-from-complex-types).
```json
{
Parameters are case-sensitive.
"filterable": false, "facetable": false },
+ {
+ "name": "adult",
+ "type": "Edm.ComplexType",
+ "fields": [
+ {
+ "name": "isAdultContent",
+ "type": "Edm.Boolean",
+ "searchable": false,
+ "filterable": true,
+ "facetable": true
+ },
+ {
+ "name": "isGoryContent",
+ "type": "Edm.Boolean",
+ "searchable": false,
+ "filterable": true,
+ "facetable": true
+ },
+ {
+ "name": "isRacyContent",
+ "type": "Edm.Boolean",
+ "searchable": false,
+ "filterable": true,
+ "facetable": true
+ },
+ {
+ "name": "adultScore",
+ "type": "Edm.Double",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "goreScore",
+ "type": "Edm.Double",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "racyScore",
+ "type": "Edm.Double",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ }
+ ]
+ },
+ {
+ "name": "brands",
+ "type": "Collection(Edm.ComplexType)",
+ "fields": [
+ {
+ "name": "name",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "confidence",
+ "type": "Edm.Double",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "rectangle",
+ "type": "Edm.ComplexType",
+ "fields": [
+ {
+ "name": "x",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "y",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "w",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "h",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ }
+ ]
+ }
+ ]
+ },
{ "name": "categories", "type": "Collection(Edm.ComplexType)",
Parameters are case-sensitive.
{ "name": "faceBoundingBox", "type": "Collection(Edm.ComplexType)",
+ "fields": [
+ {
+ "name": "top",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "left",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "width",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "height",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ }
+ ]
+ }
+ ]
+ },
+ {
+ "name": "objects",
+ "type": "Collection(Edm.ComplexType)",
+ "fields": [
+ {
+ "name": "object",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "confidence",
+ "type": "Edm.Double",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "rectangle",
+ "type": "Edm.ComplexType",
"fields": [ { "name": "x",
Parameters are case-sensitive.
"searchable": false, "filterable": false, "facetable": false
+ },
+ {
+ "name": "w",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "h",
+ "type": "Edm.Int32",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
+ }
+ ]
+ },
+ {
+ "name": "parent",
+ "type": "Edm.ComplexType",
+ "fields": [
+ {
+ "name": "object",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "facetable": false
+ },
+ {
+ "name": "confidence",
+ "type": "Edm.Double",
+ "searchable": false,
+ "filterable": false,
+ "facetable": false
} ] }
Parameters are case-sensitive.
"filterable": false, "facetable": false },
+ {
+ "name": "hint",
+ "type": "Edm.String",
+ "searchable": true,
+ "filterable": false,
+ "facetable": false
+ },
{ "name": "confidence", "type": "Edm.Double",
Parameters are case-sensitive.
```
-### Sample output field mapping (for the above index)
+### Sample output field mapping
+
+The target field can be a complex field or collection. The index definition specifies any subfields.
```json
- "outputFieldMappings": [
- {
- "sourceFieldName": "/document/normalized_images/*/categories/*",
- "targetFieldName": "categories"
- },
- {
- "sourceFieldName": "/document/normalized_images/*/tags/*",
- "targetFieldName": "tags"
- },
- {
- "sourceFieldName": "/document/normalized_images/*/description",
- "targetFieldName": "description"
- },
- {
- "sourceFieldName": "/document/normalized_images/*/faces/*",
- "targetFieldName": "faces"
- },
- {
- "sourceFieldName": "/document/normalized_images/*/brands/*/name",
- "targetFieldName": "brands"
- }
+"outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/normalized_images/*/adult",
+ "targetFieldName": "adult"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/brands/*",
+ "targetFieldName": "brands"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/categories/*",
+ "targetFieldName": "categories"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/description",
+ "targetFieldName": "description"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/faces/*",
+ "targetFieldName": "faces"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/objects/*",
+ "targetFieldName": "objects"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/tags/*",
+ "targetFieldName": "tags"
+ }
```

### Variation on output field mappings (nested properties)
-You can define output field mappings to lower-level properties, such as just landmarks or celebrities. In this case, make sure your index schema has a field to contain landmarks specifically.
+You can define output field mappings to lower-level properties, such as just celebrities or landmarks. In this case, make sure your index schema has a field to contain each detail specifically.
```json
- "outputFieldMappings": [
- {
- "sourceFieldName": "/document/normalized_images/*/categories/detail/celebrities/*",
- "targetFieldName": "celebrities"
- }
+"outputFieldMappings": [
+ {
+ "sourceFieldName": "/document/normalized_images/*/categories/detail/celebrities/*",
+ "targetFieldName": "celebrities"
+ },
+ {
+ "sourceFieldName": "/document/normalized_images/*/categories/detail/landmarks/*",
+ "targetFieldName": "landmarks"
+ }
```

## Sample input
You can define output field mappings to lower-level properties, such as just lan
"confidence": 0.999028444 } ],
- "landmarks": [
- {
- "name": "Forbidden City",
- "confidence": 0.9978346
- }
- ]
+ "landmarks": [ ]
} } ],
You can define output field mappings to lower-level properties, such as just lan
} ] },
- "requestId": "0dbec5ad-a3d3-4f7e-96b4-dfd57efe967d",
- "metadata": {
- "width": 1500,
- "height": 1000,
- "format": "Jpeg"
- },
"faces": [ { "age": 44,
In the following error cases, no elements are extracted.
| Error Code | Description |
||-|
-| `NotSupportedLanguage` | The language provided is not supported. |
+| `NotSupportedLanguage` | The language provided isn't supported. |
| `InvalidImageUrl` | Image URL is badly formatted or not accessible.|
-| `InvalidImageFormat` | Input data is not a valid image. |
+| `InvalidImageFormat` | Input data isn't a valid image. |
| `InvalidImageSize` | Input image is too large. |
-| `NotSupportedVisualFeature` | Specified feature type is not valid. |
+| `NotSupportedVisualFeature` | Specified feature type isn't valid. |
| `NotSupportedImage` | Unsupported image, for example, child pornography. | | `InvalidDetails` | Unsupported domain-specific model. |
search Cognitive Search Skill Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-ocr.md
Title: OCR cognitive skill
description: Extract text from image files using optical character recognition (OCR) in an enrichment pipeline in Azure Cognitive Search.
Last updated 04/27/2022
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `detectOrientation` | Enables autodetection of image orientation. <br/> Valid values: true / false.|
-| `defaultLanguageCode` | <p> Language code of the input text. Supported languages include: <br/> zh-Hans (ChineseSimplified) <br/> zh-Hant (ChineseTraditional) <br/>cs (Czech) <br/>da (Danish) <br/>nl (Dutch) <br/>en (English) <br/>fi (Finnish) <br/>fr (French) <br/> de (German) <br/>el (Greek) <br/> hu (Hungarian) <br/> it (Italian) <br/> ja (Japanese) <br/> ko (Korean) <br/> nb (Norwegian) <br/> pl (Polish) <br/> pt (Portuguese) <br/> ru (Russian) <br/> es (Spanish) <br/> sv (Swedish) <br/> tr (Turkish) <br/> ar (Arabic) <br/> ro (Romanian) <br/> sr-Cyrl (SerbianCyrillic) <br/> sr-Latn (SerbianLatin) <br/> sk (Slovak) <br/> unk (Unknown) <br/><br/> If the language code is unspecified or null, the language will be set to English. If the language is explicitly set to "unk", the language will be auto-detected. </p> |
-| `lineEnding` | The value to use between each detected line. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". |
+| `detectOrientation` | Detects image orientation. Valid values are `true` or `false`.|
+| `defaultLanguageCode` | Language code of the input text. Supported languages include: <br/> `zh-Hans` (ChineseSimplified) <br/> `zh-Hant` (ChineseTraditional) <br/>`cs` (Czech) <br/>`da` (Danish) <br/>`nl` (Dutch) <br/>`en` (English) <br/>`fi` (Finnish) <br/>`fr` (French) <br/>`de` (German) <br/>`el` (Greek) <br/>`hu` (Hungarian) <br/>`it` (Italian) <br/>`ja` (Japanese) <br/>`ko` (Korean) <br/>`nb` (Norwegian) <br/>`pl` (Polish) <br/>`pt` (Portuguese) <br/>`ru` (Russian) <br/>`es` (Spanish) <br/>`sv` (Swedish) <br/>`tr` (Turkish) <br/>`ar` (Arabic) <br/>`ro` (Romanian) <br/>`sr-Cyrl` (SerbianCyrillic) <br/>`sr-Latn` (SerbianLatin) <br/>`sk` (Slovak) <br/>`unk` (Unknown) <br/><br/> If the language code is unspecified or null, the language is set to English. If the language is explicitly set to `unk`, the language is auto-detected. |
+| `lineEnding` | The value to use as a line separator. Possible values: "Space", "CarriageReturn", "LineFeed". The default is "Space". |
-Previously, there was a parameter called "textExtractionAlgorithm" for specifying whether the skill should extract "printed" or "handwritten" text. This parameter is deprecated and no longer necessary as the latest Read API algorithm is capable of extracting both types of text at once. If your skill definition already includes this parameter, you do not need to remove it, but it will no longer be used and both types of text will be extracted going forward regardless of what it is set to.
+In previous versions, there was a parameter called "textExtractionAlgorithm" to specify extraction of "printed" or "handwritten" text. This parameter is deprecated because the current Read API algorithm extracts both types of text at once. If your skill includes this parameter, you don't need to remove it, but it won't be used during skill execution.
## Skill inputs | Input name | Description | |||
-| `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. See the [sample](#sample-output) for more information.|
+| `image` | Complex Type. Currently only works with "/document/normalized_images" field, produced by the Azure Blob indexer when ```imageAction``` is set to a value other than ```none```. |
## Skill outputs
Previously, there was a parameter called "textExtractionAlgorithm" for specifyin
} ```
-## Sample: Merging text extracted from embedded images with the content of the document.
+## Sample: Merging text extracted from embedded images with the content of the document
-A common use case for Text Merger is the ability to merge the textual representation of images (text from an OCR skill, or the caption of an image) into the content field of a document.
+Document cracking, the first step in skillset execution, separates text and image content. A common use case for Text Merger is merging the textual representation of images (text from an OCR skill, or the caption of an image) into the content field of a document. This is for scenarios where the source document is a PDF or Word document that combines text with embedded images.
The following example skillset creates a *merged_text* field. This field contains the textual content of your document and the OCRed text from each of the images embedded in that document.
search Search Dotnet Sdk Migration Version 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-1.md
- Title: Upgrade to Azure Search .NET SDK version 1.1-
-description: Migrate code to the Azure Search .NET SDK version 1.1 from older API versions. Learn what's new and what code changes are required.
-Previously updated : 09/16/2021
-# Upgrade to Azure Search .NET SDK version 1.1
-
-If you're using version 1.0.2-preview or older of the [Azure Search .NET SDK](/dotnet/api/overview/azure/search), this article will help you upgrade your application to use version 1.1.
-
-For a more general walkthrough of the SDK including examples, see [How to use Azure Search from a .NET Application](search-howto-dotnet-sdk.md).
-
-> [!NOTE]
-> Once you upgrade to version 1.1, or if you're already using a version between 1.1 and 2.0-preview inclusive, you should upgrade to version 3. See [Upgrading to the Azure Search .NET SDK version 3](search-dotnet-sdk-migration.md) for instructions.
->
-
-First, update your NuGet reference for `Microsoft.Azure.Search` using either the NuGet Package Manager Console or by right-clicking on your project references and selecting "Manage NuGet Packages..." in Visual Studio.
-
-Once NuGet has downloaded the new packages and their dependencies, rebuild your project.
-
-If you were previously using version 1.0.0-preview, 1.0.1-preview, or 1.0.2-preview, the build should succeed and you're ready to go!
-
-If you were previously using version 0.13.0-preview or older, you should see build errors like the following:
-
-```output
-Program.cs(137,56,137,62): error CS0117: 'Microsoft.Azure.Search.Models.IndexBatch' does not contain a definition for 'Create'
-Program.cs(137,99,137,105): error CS0117: 'Microsoft.Azure.Search.Models.IndexAction' does not contain a definition for 'Create'
-Program.cs(146,41,146,54): error CS1061: 'Microsoft.Azure.Search.IndexBatchException' does not contain a definition for 'IndexResponse' and no extension method 'IndexResponse' accepting a first argument of type 'Microsoft.Azure.Search.IndexBatchException' could be found (are you missing a using directive or an assembly reference?)
-Program.cs(163,13,163,42): error CS0246: The type or namespace name 'DocumentSearchResponse' could not be found (are you missing a using directive or an assembly reference?)
-```
-
-The next step is to fix the build errors one by one. Most will require changing some class and method names that have been renamed in the SDK. [List of breaking changes in version 1.1](#ListOfChangesV1) contains a list of these name changes.
-
-If you're using custom classes to model your documents, and those classes have properties of non-nullable primitive types (for example, `int` or `bool` in C#), there is a bug fix in the 1.1 version of the SDK of which you should be aware. See [Bug fixes in version 1.1](#BugFixesV1) for more details.
-
-Finally, once you've fixed any build errors, you can make changes to your application to take advantage of new functionality if you wish.
-
-<a name="ListOfChangesV1"></a>
-
-## List of breaking changes in version 1.1
-The following list is ordered by the likelihood that the change will affect your application code.
-
-### IndexBatch and IndexAction changes
-`IndexBatch.Create` has been renamed to `IndexBatch.New` and no longer has a `params` argument. You can use `IndexBatch.New` for batches that mix different types of actions (merges, deletes, etc.). In addition, there are new static methods for creating batches where all the actions are the same: `Delete`, `Merge`, `MergeOrUpload`, and `Upload`.
-
-`IndexAction` no longer has public constructors and its properties are now immutable. You should use the new static methods for creating actions for different purposes: `Delete`, `Merge`, `MergeOrUpload`, and `Upload`. `IndexAction.Create` has been removed. If you used the overload that takes only a document, make sure to use `Upload` instead.
-
-#### Example
-If your code looks like this:
-
-```csharp
-var batch = IndexBatch.Create(documents.Select(doc => IndexAction.Create(doc)));
-indexClient.Documents.Index(batch);
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-var batch = IndexBatch.New(documents.Select(doc => IndexAction.Upload(doc)));
-indexClient.Documents.Index(batch);
-```
-
-If you want, you can further simplify it to this:
-
-```csharp
-var batch = IndexBatch.Upload(documents);
-indexClient.Documents.Index(batch);
-```
-
-### IndexBatchException changes
-The `IndexBatchException.IndexResponse` property has been renamed to `IndexingResults`, and its type is now `IList<IndexingResult>`.
-
-#### Example
-If your code looks like this:
-
-```csharp
-catch (IndexBatchException e)
-{
- Console.WriteLine(
- "Failed to index some of the documents: {0}",
- String.Join(", ", e.IndexResponse.Results.Where(r => !r.Succeeded).Select(r => r.Key)));
-}
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-catch (IndexBatchException e)
-{
- Console.WriteLine(
- "Failed to index some of the documents: {0}",
- String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
-}
-```
-
-<a name="OperationMethodChanges"></a>
-
-### Operation method changes
-Each operation in the Azure Search .NET SDK is exposed as a set of method overloads for synchronous and asynchronous callers. The signatures and factoring of these method overloads have changed in version 1.1.
-
-For example, the "Get Index Statistics" operation in older versions of the SDK exposed these signatures:
-
-In `IIndexOperations`:
-
-```csharp
-// Asynchronous operation with all parameters
-Task<IndexGetStatisticsResponse> GetStatisticsAsync(
- string indexName,
- CancellationToken cancellationToken);
-```
-
-In `IndexOperationsExtensions`:
-
-```csharp
-// Asynchronous operation with only required parameters
-public static Task<IndexGetStatisticsResponse> GetStatisticsAsync(
- this IIndexOperations operations,
- string indexName);
-
-// Synchronous operation with only required parameters
-public static IndexGetStatisticsResponse GetStatistics(
- this IIndexOperations operations,
- string indexName);
-```
-
-The method signatures for the same operation in version 1.1 look like this:
-
-In `IIndexesOperations`:
-
-```csharp
-// Asynchronous operation with lower-level HTTP features exposed
-Task<AzureOperationResponse<IndexGetStatisticsResult>> GetStatisticsWithHttpMessagesAsync(
- string indexName,
- SearchRequestOptions searchRequestOptions = default(SearchRequestOptions),
- Dictionary<string, List<string>> customHeaders = null,
- CancellationToken cancellationToken = default(CancellationToken));
-```
-
-In `IndexesOperationsExtensions`:
-
-```csharp
-// Simplified asynchronous operation
-public static Task<IndexGetStatisticsResult> GetStatisticsAsync(
- this IIndexesOperations operations,
- string indexName,
- SearchRequestOptions searchRequestOptions = default(SearchRequestOptions),
- CancellationToken cancellationToken = default(CancellationToken));
-
-// Simplified synchronous operation
-public static IndexGetStatisticsResult GetStatistics(
- this IIndexesOperations operations,
- string indexName,
- SearchRequestOptions searchRequestOptions = default(SearchRequestOptions));
-```
-
-Starting with version 1.1, the Azure Search .NET SDK organizes operation methods differently:
-
-* Optional parameters are now modeled as default parameters rather than additional method overloads. This reduces the number of method overloads, sometimes dramatically.
-* The extension methods now hide a lot of the extraneous details of HTTP from the caller. For example, older versions of the SDK returned a response object with an HTTP status code, which you often didn't need to check because operation methods throw `CloudException` for any status code that indicates an error. The new extension methods just return model objects, saving you the trouble of having to unwrap them in your code.
-* Conversely, the core interfaces now expose methods that give you more control at the HTTP level if you need it. You can now pass in custom HTTP headers to be included in requests, and the new `AzureOperationResponse<T>` return type gives you direct access to the `HttpRequestMessage` and `HttpResponseMessage` for the operation. `AzureOperationResponse` is defined in the `Microsoft.Rest.Azure` namespace and replaces `Hyak.Common.OperationResponse`. The sketch after this list shows both capabilities.
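
Based on the version 1.1 signatures shown earlier, a call that adds a custom header and reads the raw response might look like the following minimal sketch. The header name, the `serviceClient` variable, and the `hotels` index are illustrative, not part of the SDK:

```csharp
// Inside an async method, using the "WithHttpMessages" variant to send a
// custom header and inspect the raw HTTP response alongside the model object.
var customHeaders = new Dictionary<string, List<string>>
{
    ["my-custom-header"] = new List<string> { "my-value" }
};

AzureOperationResponse<IndexGetStatisticsResult> response =
    await serviceClient.Indexes.GetStatisticsWithHttpMessagesAsync(
        "hotels",
        customHeaders: customHeaders);

Console.WriteLine(response.Response.StatusCode);   // raw HttpResponseMessage
long documentCount = response.Body.DocumentCount;  // unwrapped model object
```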
-
-### ScoringParameters changes
-A new class named `ScoringParameter` has been added in the latest SDK to make it easier to provide parameters to scoring profiles in a search query. Previously the `ScoringParameters` property of the `SearchParameters` class was typed as `IList<string>`; now it is typed as `IList<ScoringParameter>`.
-
-#### Example
-If your code looks like this:
-
-```csharp
-var sp = new SearchParameters();
-sp.ScoringProfile = "jobsScoringFeatured"; // Use a scoring profile
-sp.ScoringParameters = new[] { "featuredParam-featured", "mapCenterParam-" + lon + "," + lat };
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-var sp = new SearchParameters();
-sp.ScoringProfile = "jobsScoringFeatured"; // Use a scoring profile
-sp.ScoringParameters =
- new[]
- {
- new ScoringParameter("featuredParam", new[] { "featured" }),
- new ScoringParameter("mapCenterParam", GeographyPoint.Create(lat, lon))
- };
-```
-
-### Model class changes
-Due to the signature changes described in [Operation method changes](#OperationMethodChanges), many classes in the `Microsoft.Azure.Search.Models` namespace have been renamed or removed. For example:
-
-* `IndexDefinitionResponse` has been replaced by `AzureOperationResponse<Index>`
-* `DocumentSearchResponse` has been renamed to `DocumentSearchResult`
-* `IndexResult` has been renamed to `IndexingResult`
-* `Documents.Count()` now returns a `long` with the document count instead of a `DocumentCountResponse`
-* `IndexGetStatisticsResponse` has been renamed to `IndexGetStatisticsResult`
-* `IndexListResponse` has been renamed to `IndexListResult`
-
-To summarize, `OperationResponse`-derived classes that existed only to wrap a model object have been removed. The remaining classes have had their suffix changed from `Response` to `Result`.
-
-#### Example
-If your code looks like this:
-
-```csharp
-IndexerGetStatusResponse statusResponse = null;
-
-try
-{
- statusResponse = _searchClient.Indexers.GetStatus(indexer.Name);
-}
-catch (Exception ex)
-{
- Console.WriteLine("Error polling for indexer status: {0}", ex.Message);
- return;
-}
-
-IndexerExecutionResult lastResult = statusResponse.ExecutionInfo.LastResult;
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-IndexerExecutionInfo status = null;
-
-try
-{
- status = _searchClient.Indexers.GetStatus(indexer.Name);
-}
-catch (Exception ex)
-{
- Console.WriteLine("Error polling for indexer status: {0}", ex.Message);
- return;
-}
-
-IndexerExecutionResult lastResult = status.LastResult;
-```
-
-#### Response classes and IEnumerable
-An additional change that may affect your code is that response classes that hold collections no longer implement `IEnumerable<T>`. Instead, you can access the collection property directly. For example, if your code looks like this:
-
-```csharp
-DocumentSearchResponse<Hotel> response = indexClient.Documents.Search<Hotel>(searchText, sp);
-foreach (SearchResult<Hotel> result in response)
-{
- Console.WriteLine(result.Document);
-}
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-DocumentSearchResult<Hotel> response = indexClient.Documents.Search<Hotel>(searchText, sp);
-foreach (SearchResult<Hotel> result in response.Results)
-{
- Console.WriteLine(result.Document);
-}
-```
-
-#### Special case for web applications
-If you have a web application that serializes `DocumentSearchResponse` directly to send search results to the browser, you will need to change your code or the results will not serialize correctly. For example, if your code looks like this:
-
-```csharp
-public ActionResult Search(string q = "")
-{
- // If blank search, assume they want to search everything
- if (string.IsNullOrWhiteSpace(q))
- q = "*";
-
- return new JsonResult
- {
- JsonRequestBehavior = JsonRequestBehavior.AllowGet,
- Data = _featuresSearch.Search(q)
- };
-}
-```
-
-You can change it by getting the `.Results` property of the search response to fix search result rendering:
-
-```csharp
-public ActionResult Search(string q = "")
-{
- // If blank search, assume they want to search everything
- if (string.IsNullOrWhiteSpace(q))
- q = "*";
-
- return new JsonResult
- {
- JsonRequestBehavior = JsonRequestBehavior.AllowGet,
- Data = _featuresSearch.Search(q).Results
- };
-}
-```
-
-You will have to look for such cases in your code yourself; **the compiler will not warn you** because `JsonResult.Data` is of type `object`.
-
-### CloudException changes
-The `CloudException` class has moved from the `Hyak.Common` namespace to the `Microsoft.Rest.Azure` namespace. Also, its `Error` property has been renamed to `Body`.
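
As a quick sketch of what the rename looks like in exception-handling code (assuming an existing `indexClient` and `batch` as in the earlier examples):

```csharp
using Microsoft.Rest.Azure;  // CloudException now lives here, not Hyak.Common

try
{
    indexClient.Documents.Index(batch);
}
catch (CloudException e)
{
    // The old "Error" property is now "Body".
    Console.WriteLine("Request failed: {0}", e.Body.Message);
}
```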
-
-### SearchServiceClient and SearchIndexClient changes
-The type of the `Credentials` property has changed from `SearchCredentials` to its base class, `ServiceClientCredentials`. If you need to access the `SearchCredentials` of a `SearchIndexClient` or `SearchServiceClient`, please use the new `SearchCredentials` property.
-
-In older versions of the SDK, `SearchServiceClient` and `SearchIndexClient` had constructors that took an `HttpClient` parameter. These have been replaced with constructors that take an `HttpClientHandler` and an array of `DelegatingHandler` objects. This makes it easier to install custom handlers to pre-process HTTP requests if necessary.
-
-Finally, the constructors that took a `Uri` and `SearchCredentials` have changed. For example, if you have code that looks like this:
-
-```csharp
-var client =
- new SearchServiceClient(
- new SearchCredentials("abc123"),
- new Uri("http://myservice.search.windows.net"));
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-var client =
- new SearchServiceClient(
- new Uri("http://myservice.search.windows.net"),
- new SearchCredentials("abc123"));
-```
-
-Also note that the type of the credentials parameter has changed to `ServiceClientCredentials`. This is unlikely to affect your code since `SearchCredentials` is derived from `ServiceClientCredentials`.
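
Putting the handler and constructor changes together, a custom `DelegatingHandler` could be installed as in the following sketch. The `LoggingHandler` class is illustrative, and the exact constructor overloads may vary:

```csharp
// Illustrative handler that logs each outgoing request before sending it.
public class LoggingHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Console.WriteLine("{0} {1}", request.Method, request.RequestUri);
        return base.SendAsync(request, cancellationToken);
    }
}

// Install the handler when constructing the client (sketch only).
var client =
    new SearchServiceClient(
        new Uri("http://myservice.search.windows.net"),
        new SearchCredentials("abc123"),
        new HttpClientHandler(),
        new LoggingHandler());
```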
-
-### Passing a request ID
-In older versions of the SDK, you could set a request ID on the `SearchServiceClient` or `SearchIndexClient` and it would be included in every request to the REST API. This is useful for troubleshooting issues with your search service if you need to contact support. However, it is more useful to set a unique request ID for every operation rather than to use the same ID for all operations. For this reason, the `SetClientRequestId` methods of `SearchServiceClient` and `SearchIndexClient` have been removed. Instead, you can pass a request ID to each operation method via the optional `SearchRequestOptions` parameter.
-
-> [!NOTE]
-> In a future release of the SDK, we will add a new mechanism for setting a request ID globally on the client objects that is consistent with the approach used by other Azure SDKs.
->
->
-
-### Example
-If you have code that looks like this:
-
-```csharp
-client.SetClientRequestId(Guid.NewGuid());
-...
-long count = client.Documents.Count();
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-long count = client.Documents.Count(new SearchRequestOptions(requestId: Guid.NewGuid()));
-```
-
-### Interface name changes
-The operation group interface names have all changed to be consistent with their corresponding property names:
-
-* The type of `ISearchServiceClient.Indexes` has been renamed from `IIndexOperations` to `IIndexesOperations`.
-* The type of `ISearchServiceClient.Indexers` has been renamed from `IIndexerOperations` to `IIndexersOperations`.
-* The type of `ISearchServiceClient.DataSources` has been renamed from `IDataSourceOperations` to `IDataSourcesOperations`.
-* The type of `ISearchIndexClient.Documents` has been renamed from `IDocumentOperations` to `IDocumentsOperations`.
-
-This change is unlikely to affect your code unless you created mocks of these interfaces for test purposes.
-
-<a name="BugFixesV1"></a>
-
-## Bug fixes in version 1.1
-There was a bug in older versions of the Azure Search .NET SDK relating to serialization of custom model classes. The bug could occur if you created a custom model class with a property of a non-nullable value type.
-
-### Steps to reproduce
-Create a custom model class with a property of non-nullable value type. For example, add a public `UnitCount` property of type `int` instead of `int?`.
-
-If you index a document with the default value of that type (for example, 0 for `int`), the field will be null in Azure Search. If you subsequently search for that document, the `Search` call will throw `JsonSerializationException` complaining that it can't convert `null` to `int`.
-
-Also, filters may not work as expected since null was written to the index instead of the intended value.
-
-### Fix details
-We have fixed this issue in version 1.1 of the SDK. Now, if you have a model class like this:
-
-```csharp
-public class Model
-{
- public string Key { get; set; }
-
- public int IntValue { get; set; }
-}
-```
-
-and you set `IntValue` to 0, that value is now correctly serialized as 0 on the wire and stored as 0 in the index. Round tripping also works as expected.
-
-There is one potential issue to be aware of with this approach: If you use a model type with a non-nullable property, you have to **guarantee** that no documents in your index contain a null value for the corresponding field. Neither the SDK nor the Azure Search REST API will help you to enforce this.
-
-This is not just a hypothetical concern: Imagine a scenario where you add a new field to an existing index that is of type `Edm.Int32`. After updating the index definition, all documents will have a null value for that new field (since all types are nullable in Azure Search). If you then use a model class with a non-nullable `int` property for that field, you will get a `JsonSerializationException` like this when trying to retrieve documents:
-
-```output
-Error converting value {null} to type 'System.Int32'. Path 'IntValue'.
-```
-
-For this reason, we still recommend that you use nullable types in your model classes as a best practice.
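
Following that recommendation, the earlier model would be declared with a nullable property:

```csharp
public class Model
{
    public string Key { get; set; }

    // Nullable, so a null field in the index round-trips safely.
    public int? IntValue { get; set; }
}
```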
-
-For more details on this bug and the fix, please see [this issue on GitHub](https://github.com/Azure/azure-sdk-for-net/issues/1063).
search Search Dotnet Sdk Migration Version 10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-10.md
- Title: Upgrade to .NET SDK version 10-
-description: Migrate code to the Azure Cognitive Search .NET SDK version 10 from older versions. Learn what is new and which code changes are required.
-Previously updated : 03/21/2022
-# Upgrade to Azure Cognitive Search .NET SDK version 10
-
-> [!IMPORTANT]
-> Version 10 is the last version of the Microsoft.Azure.Search package and it's now deprecated. It's succeeded by Azure.Search.Documents. If you're using older versions of Microsoft.Azure.Search, we recommend a sequential migration path. For example, if you're using version 8.0-preview or older, you should upgrade to version 9 first, and then to version 10, and finally to version 11.
-
-If you're using version 9.0 or older of the [.NET SDK](/dotnet/api/overview/azure/search), this article will help you upgrade your application to use version 10.
-
-"Azure Search" is renamed to "Azure Cognitive Search" in version 10, but namespaces and package names are unchanged. Previous versions of the SDK (9.0 and earlier) continue to use the "Microsoft.Search" prefix. For more information about using the SDK, including examples, see [How to use Azure Cognitive Search from a .NET Application](search-howto-dotnet-sdk.md).
-
-Version 10 adds several features and bug fixes, bringing it to the same functional level as the REST API version `2019-05-06`. In cases where a change breaks existing code, we'll walk you through the [steps required to resolve the issue](#UpgradeSteps).
-
-<a name="WhatsNew"></a>
-
-## What's new in version 10
-Version 10 of the Azure Cognitive Search .NET SDK targets REST API `2019-05-06` with these updates:
-
-* Introduction of two new skills - [Conditional skill](cognitive-search-skill-conditional.md) and [Text Translation skill](cognitive-search-skill-text-translation.md).
-* [Shaper skill](cognitive-search-skill-shaper.md) inputs have been restructured to accommodate consolidation from nested contexts. For more information, see this [example JSON definition](./cognitive-search-skill-shaper.md#scenario-3-input-consolidation-from-nested-contexts).
-* Addition of two new [field mapping functions](search-indexer-field-mappings.md):
- - [urlEncode](./search-indexer-field-mappings.md#urlencode-function)
- - [urlDecode](./search-indexer-field-mappings.md#urldecode-function)
-* On certain occasions, errors and warnings that show up in [indexer execution status](/rest/api/searchservice/get-indexer-status) can have additional details that help in debugging. `IndexerExecutionResult` has been updated to reflect this behavior.
-* Individual skills defined within a [skillset](cognitive-search-defining-skillset.md) can optionally be identified by specifying a `name` property.
-* `ServiceLimits` shows limits for [complex types](./search-howto-complex-data-types.md) and `IndexerExecutionInfo` shows pertinent indexer limits/quotas.
-
-<a name="UpgradeSteps"></a>
-
-## Steps to upgrade
-
-1. Update your NuGet reference for `Microsoft.Azure.Search` using either the NuGet Package Manager Console or by right-clicking on your project references and selecting "Manage NuGet Packages..." in Visual Studio.
-
-2. Once NuGet has downloaded the new packages and their dependencies, rebuild your project.
-
-3. If your build fails, you will need to fix each build error. See [Breaking changes in version 10](#ListOfChanges) for details on how to resolve each potential build error.
-
-4. Once you've fixed any build errors or warnings, you can make changes to your application to take advantage of new functionality if you wish. New features in the SDK are detailed in [What's new in version 10](#WhatsNew).
-
-<a name="ListOfChanges"></a>
-
-## Breaking changes in version 10
-
-There are several breaking changes in version 10 that may require code changes in addition to rebuilding your application.
-
-> [!NOTE]
-> The list of changes below is not exhaustive. Some changes will likely not result in build errors, but are technically breaking since they break binary compatibility with assemblies that depend on earlier versions of the Azure Cognitive Search .NET SDK assemblies. Significant changes that fall under this category are also listed along with recommendations. Please rebuild your application when upgrading to version 10 to avoid any binary compatibility issues.
-
-### Custom Web API skill definition
-
-The definition of the [Custom Web API skill](cognitive-search-custom-skill-web-api.md) was incorrectly specified in version 9 and older.
-
-The model for `WebApiSkill` specified `HttpHeaders` as an object property that _contains_ a dictionary. Creating a skillset with a `WebApiSkill` constructed in this manner would result in an exception because the REST API would consider the request badly formed. This issue has been corrected by making `HttpHeaders` **a top-level dictionary property** on the `WebApiSkill` model itself, which the REST API considers a valid request.
-
-For example, if you previously attempted to instantiate a `WebApiSkill` as follows:
-
-```csharp
-
-var webApiSkill = new WebApiSkill(
- inputs,
- outputs,
- uri: "https://contoso.example.org")
-{
- HttpHeaders = new WebApiHttpHeaders()
- {
- Headers = new Dictionary<string, string>()
- {
- ["header"] = "value"
- }
- }
-};
-
-```
-
-change it to the following, to avoid the validation error from the REST API:
-
-```csharp
-
-var webApiSkill = new WebApiSkill(
- inputs,
- outputs,
- uri: "https://contoso.example.org")
-{
- HttpHeaders = new Dictionary<string, string>()
- {
- ["header"] = "value"
- }
-};
-
-```
-
-## Shaper skill allows nested context consolidation
-
-Shaper skill can now allow input consolidation from nested contexts. To enable this change, we modified `InputFieldMappingEntry` so that it can be instantiated by specifying just a `Source` property, or both the `SourceContext` and `Inputs` properties.
-
-You will most likely not need to make any code changes; however, note that only one of these two combinations is allowed (a short sketch follows this list). This means:
-
-- Creating an `InputFieldMappingEntry` where only `Source` is initialized is valid.
-- Creating an `InputFieldMappingEntry` where only `SourceContext` and `Inputs` are initialized is valid.
-- All other combinations involving those three properties are invalid.
-
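Assuming the version 10 constructor exposes these as optional parameters, the two valid shapes might look like this sketch (the field names and paths are illustrative):

```csharp
// Valid: only Source is initialized.
var flatInput = new InputFieldMappingEntry(
    name: "text",
    source: "/document/content");

// Valid: only SourceContext and Inputs are initialized.
var nestedInput = new InputFieldMappingEntry(
    name: "pageText",
    sourceContext: "/document/pages/*",
    inputs: new List<InputFieldMappingEntry>
    {
        new InputFieldMappingEntry(name: "text", source: "/document/pages/*/content")
    });
```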
-If you decide to start making use of this new capability, make sure all your clients are updated to use version 10 first, before rolling out that change. Otherwise, there is a possibility that an update by a client (using an older version of the SDK) to the Shaper skill may result in validation errors.
-
-> [!NOTE]
-> Even though the underlying `InputFieldMappingEntry` model has been modified to allow consolidation from nested contexts, its use is only valid within the definition of a Shaper skill. Using this capability in other skills, while valid at compile time, will result in a validation error at runtime.
-
-## Skills can be identified by a name
-
-Each skill within a skillset now has a new property `Name`, which can be initialized in your code to help identify the skill. This is optional - when unspecified (which is the default, if no explicit code change was made), it is assigned a default name using the 1-based index of the skill in the skillset, prefixed with the '#' character. For example, in the following skillset definition (most initializations skipped for brevity):
-
-```csharp
-var skillset = new Skillset()
-{
- Skills = new List<Skill>()
- {
- new SentimentSkill(),
- new WebApiSkill(),
- new ShaperSkill(),
- ...
- }
-}
-```
-
-`SentimentSkill` is assigned a name `#1`, `WebApiSkill` is assigned `#2`, `ShaperSkill` is assigned `#3` and so on.
-
-If you choose to identify skills by a custom name, make sure to update all instances of your clients to version 10 of the SDK first. Otherwise, there is a possibility that a client using an older version of the SDK could `null` out the `Name` property of a skill, causing the client to fall back on the default naming scheme.
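
For example, explicit names could be assigned as in this sketch (again with most initializations skipped for brevity, and the names themselves illustrative):

```csharp
var skillset = new Skillset()
{
    Skills = new List<Skill>()
    {
        new SentimentSkill() { Name = "detect-sentiment" },
        new WebApiSkill() { Name = "call-custom-api" }
    }
};
```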
-
-## Details about errors and warnings
-
-`ItemError` and `ItemWarning` models that encapsulate details of errors and warnings (respectively) that occur during an indexer execution have been modified to include three new properties with the objective to aid in debugging the indexer. These properties, illustrated in the sketch after this list, are:
-
-- `Name`: The name of the source at which the error originated. For example, it could refer to a particular skill in the attached skillset.
-- `Details`: Additional verbose details about the error or warning.
-- `DocumentationLink`: A link to a troubleshooting guide for the specific error or warning.
-
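A sketch of where these properties surface, assuming an existing `ISearchServiceClient` named `serviceClient` and an indexer named `my-indexer`:

```csharp
IndexerExecutionInfo status = serviceClient.Indexers.GetStatus("my-indexer");

foreach (ItemError error in status.LastResult.Errors)
{
    Console.WriteLine("Source: {0}", error.Name);
    Console.WriteLine("Details: {0}", error.Details);
    Console.WriteLine("Help: {0}", error.DocumentationLink);
}
```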
-> [!NOTE]
-> We have started to structure our errors and warnings to include these useful details whenever possible. We are working to make sure that for all errors and warnings these details are present, but it is a work in progress and these additional details may not always be populated.
-
-## Next steps
-
-- Changes to the Shaper skill have the most potential impact on new or existing code. As a next step, be sure to revisit this example illustrating the input structure: [Shaper skill JSON definition example](cognitive-search-skill-shaper.md)
-- Go through the [AI enrichment overview](cognitive-search-concept-intro.md).
-- We welcome your feedback on the SDK. If you encounter problems, feel free to ask us for help on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-search). If you find a bug, you can file an issue in the [Azure .NET SDK GitHub repository](https://github.com/Azure/azure-sdk-for-net/issues). Make sure to prefix your issue title with "[Azure Cognitive Search]".
search Search Dotnet Sdk Migration Version 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-5.md
- Title: Upgrade to Azure Search .NET SDK version 5-
-description: Migrate code to the Azure Search .NET SDK version 5 from older versions. Learn what is new and which code changes are required.
-Previously updated : 09/16/2021
-# Upgrade to Azure Search .NET SDK version 5
-
-If you're using version 4.0-preview or older of the [.NET SDK](/dotnet/api/overview/azure/search), this article will help you upgrade your application to use version 5.
-
-For a more general walkthrough of the SDK including examples, see [How to use Azure Search from a .NET Application](search-howto-dotnet-sdk.md).
-
-Version 5 of the Azure Search .NET SDK contains some changes from earlier versions. These are mostly minor, so changing your code should require only minimal effort. See [Steps to upgrade](#UpgradeSteps) for instructions on how to change your code to use the new SDK version.
-
-> [!NOTE]
-> If you're using version 2.0-preview or older, you should upgrade to version 3 first, and then upgrade to version 5. See [Upgrading to the Azure Search .NET SDK version 3](search-dotnet-sdk-migration.md) for instructions.
->
-> Your Azure Search service instance supports several REST API versions, including the latest one. You can continue to use a version when it is no longer the latest one, but we recommend that you migrate your code to use the newest version. When using the REST API, you must specify the API version in every request via the api-version parameter. When using the .NET SDK, the version of the SDK you're using determines the corresponding version of the REST API. If you are using an older SDK, you can continue to run that code with no changes even if the service is upgraded to support a newer API version.
-
-<a name="WhatsNew"></a>
-
-## What's new in version 5
-Version 5 of the Azure Search .NET SDK targets the latest generally available version of the Azure Search REST API, specifically 2017-11-11. This makes it possible to use new features of Azure Search from a .NET application, including the following:
-
-* [Synonyms](search-synonyms.md).
-* You can now programmatically access warnings in indexer execution history (see the `Warning` property of `IndexerExecutionResult` in the [.NET reference](/dotnet/api/microsoft.azure.search.models.indexerexecutionresult) for more details).
-* Support for .NET Core 2.
-* New package structure supports using only the parts of the SDK that you need (see [Breaking changes in version 5](#ListOfChanges) for details).
-
-<a name="UpgradeSteps"></a>
-
-## Steps to upgrade
-First, update your NuGet reference for `Microsoft.Azure.Search` using either the NuGet Package Manager Console or by right-clicking on your project references and selecting "Manage NuGet Packages..." in Visual Studio.
-
-Once NuGet has downloaded the new packages and their dependencies, rebuild your project. Depending on how your code is structured, it may rebuild successfully. If so, you're ready to go!
-
-If your build fails, you should see a build error like the following:
-
-```output
-The name 'SuggesterSearchMode' does not exist in the current context
-```
-
-The next step is to fix this build error. See [Breaking changes in version 5](#ListOfChanges) for details on what causes the error and how to fix it.
-
-Please note that due to changes in the packaging of the Azure Search .NET SDK, you must rebuild your application in order to use version 5. These changes are detailed in [Breaking changes in version 5](#ListOfChanges).
-
-You may see additional build warnings related to obsolete methods or properties. The warnings will include instructions on what to use instead of the deprecated feature. For example, if your application uses the `IndexingParametersExtensions.DoNotFailOnUnsupportedContentType` method, you should get a warning that says "This behavior is now enabled by default, so calling this method is no longer necessary."
-
-Once you've fixed any build errors or warnings, you can make changes to your application to take advantage of new functionality if you wish. New features in the SDK are detailed in [What's new in version 5](#WhatsNew).
-
-<a name="ListOfChanges"></a>
-
-## Breaking changes in version 5
-
-### New Package Structure
-
-The most substantial breaking change in version 5 is that the `Microsoft.Azure.Search` assembly and its contents have been divided into four separate assemblies that are now distributed as four separate NuGet packages:
-
-* `Microsoft.Azure.Search`: A meta-package that includes the other packages as dependencies.
-* `Microsoft.Azure.Search.Data`: For querying and updating documents in your indexes.
-* `Microsoft.Azure.Search.Service`: For managing indexes, synonym maps, indexers, data sources, and other service-level resources.
-* `Microsoft.Azure.Search.Common`: Common types used by the other libraries; not meant to be used directly.
-
-This change is technically breaking since many types were moved between assemblies. This is why rebuilding your application is necessary in order to upgrade to version 5 of the SDK.
-
-There are a small number of other breaking changes in version 5 that may require code changes in addition to rebuilding your application.
-
-### Change to Suggesters
-
-The `Suggester` constructor no longer has an `enum` parameter for `SuggesterSearchMode`. This enum only had one value, and was therefore redundant. If you see build errors as a result of this, simply remove references to the `SuggesterSearchMode` parameter.
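-
-For example, a construction like the following (the suggester name and source field here are illustrative):
-
-```csharp
-var suggester = new Suggester("sg", SuggesterSearchMode.AnalyzingInfixMatching, new[] { "hotelName" });
-```
-
-becomes:
-
-```csharp
-var suggester = new Suggester("sg", "hotelName");
-```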
-
-### Removed obsolete members
-
-You may see build errors related to methods or properties that were marked as obsolete in earlier versions and subsequently removed in version 5. If you encounter such errors, here is how to resolve them:
-- If you were using the `IndexingParametersExtensions.IndexStorageMetadataOnly` method, use `SetBlobExtractionMode(BlobExtractionMode.StorageMetadata)` instead.
-- If you were using the `IndexingParametersExtensions.SkipContent` method, use `SetBlobExtractionMode(BlobExtractionMode.AllMetadata)` instead.
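-
-Both replacements call the same extension method with different `BlobExtractionMode` values. A minimal sketch, assuming a blob indexer's `IndexingParameters`:
-
-```csharp
-var parameters = new IndexingParameters();
-
-// Replaces the removed IndexStorageMetadataOnly():
-parameters.SetBlobExtractionMode(BlobExtractionMode.StorageMetadata);
-
-// Or, replacing the removed SkipContent():
-parameters.SetBlobExtractionMode(BlobExtractionMode.AllMetadata);
-```
-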
-### Removed preview features
-
-If you are upgrading from version 4.0-preview to version 5, be aware that JSON array and CSV parsing support for Blob Indexers has been removed since these features are still in preview. Specifically, the following methods of the `IndexingParametersExtensions` class have been removed:
-- `ParseJsonArrays`
-- `ParseDelimitedTextFiles`
-
-If your application has a hard dependency on these features, you will not be able to upgrade to version 5 of the Azure Search .NET SDK. You can continue to use version 4.0-preview. However, please keep in mind that **we do not recommend using preview SDKs in production applications**. Preview features are for evaluation only and may change.
-
-## Conclusion
-If you need more details on using the Azure Search .NET SDK, see the [.NET How-to](search-howto-dotnet-sdk.md).
-
-We welcome your feedback on the SDK. If you encounter problems, feel free to ask us for help on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-search). If you find a bug, you can file an issue in the [Azure .NET SDK GitHub repository](https://github.com/Azure/azure-sdk-for-net/issues). Make sure to prefix your issue title with "[Azure Search]".
-
-Thank you for using Azure Search!
search Search Dotnet Sdk Migration Version 9 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-9.md
- Title: Upgrade to Azure Search .NET SDK version 9
-description: Migrate code to the Azure Search .NET SDK version 9 from older versions. Learn what is new and which code changes are required.
- Previously updated : 09/16/2021
-# Upgrade to Azure Search .NET SDK version 9
-
-If you're using version 7.0-preview or older of the [Azure Search .NET SDK](/dotnet/api/overview/azure/search), this article will help you upgrade your application to use version 9.
-
-> [!NOTE]
-> If you wish to use version 8.0-preview to evaluate features that are not generally available yet, you can also follow the instructions in this article to upgrade to 8.0-preview from prior versions.
-
-For a more general walkthrough of the SDK including examples, see [How to use Azure Search from a .NET Application](search-howto-dotnet-sdk.md).
-
-Version 9 of the Azure Search .NET SDK contains many changes from earlier versions. Some of these are breaking changes, but they should only require relatively minor changes to your code. See [Steps to upgrade](#UpgradeSteps) for instructions on how to change your code to use the new SDK version.
-
-> [!NOTE]
-> If you're using version 4.0-preview or older, you should upgrade to version 5 first, and then upgrade to version 9. See [Upgrading to the Azure Search .NET SDK version 5](search-dotnet-sdk-migration-version-5.md) for instructions.
->
-> Your Azure Search service instance supports several REST API versions, including the latest one. You can continue to use a version when it is no longer the latest one, but we recommend that you migrate your code to use the newest version. When using the REST API, you must specify the API version in every request via the api-version parameter. When using the .NET SDK, the version of the SDK you're using determines the corresponding version of the REST API. If you are using an older SDK, you can continue to run that code with no changes even if the service is upgraded to support a newer API version.
-
-<a name="WhatsNew"></a>
-
-## What's new in version 9
-Version 9 of the Azure Search .NET SDK targets the 2019-05-06 version of the Azure Search REST API, with the following features:
-
-* [AI enrichment](cognitive-search-concept-intro.md) is the ability to extract text from images, blobs, and other unstructured data sources - enriching the content to make it more searchable in an Azure Search index.
-* Support for [complex types](search-howto-complex-data-types.md) allows you to model almost any nested JSON structure in an Azure Search index.
-* [Autocomplete](search-add-autocomplete-suggestions.md) provides an alternative to the **Suggest** API for implementing search-as-you-type behavior. Autocomplete "finishes" the word or phrase that a user is currently typing.
-* [JsonLines parsing mode](search-howto-index-json-blobs.md), part of blob indexing, creates one search document per JSON entity that is separated by a newline.
-
-### New preview features in version 8.0-preview
-Version 8.0-preview of the Azure Search .NET SDK targets API version 2017-11-11-Preview. This version includes all the same features as version 9, plus:
-
-* [Customer-managed encryption keys](search-security-manage-encryption-keys.md) for service-side encryption-at-rest is a new preview feature. In addition to the built-in encryption-at-rest managed by Microsoft, you can apply an additional layer of encryption where you are the sole owner of the keys.
-
-<a name="UpgradeSteps"></a>
-
-## Steps to upgrade
-First, update your NuGet reference for `Microsoft.Azure.Search` using either the NuGet Package Manager Console or by right-clicking on your project references and selecting "Manage NuGet Packages..." in Visual Studio.
-
-Once NuGet has downloaded the new packages and their dependencies, rebuild your project. Depending on how your code is structured, it may rebuild successfully. If so, you're ready to go!
-
-If your build fails, you will need to fix each build error. See [Breaking changes in version 9](#ListOfChanges) for details on how to resolve each potential build error.
-
-You may see additional build warnings related to obsolete methods or properties. The warnings will include instructions on what to use instead of the deprecated feature. For example, if your application uses the `DataSourceType.DocumentDb` property, you should get a warning that says "This member is deprecated. Use CosmosDb instead".
-
-Once you've fixed any build errors or warnings, you can make changes to your application to take advantage of new functionality if you wish. New features in the SDK are detailed in [What's new in version 9](#WhatsNew).
-
-<a name="ListOfChanges"></a>
-
-## Breaking changes in version 9
-
-There are several breaking changes in version 9 that may require code changes in addition to rebuilding your application.
-
-> [!NOTE]
-> The list of changes below is not exhaustive. Some changes will likely not result in build errors, but are technically breaking since they break binary compatibility with assemblies that depend on earlier versions of the Azure Search .NET SDK assemblies. Such changes are not listed below. Please rebuild your application when upgrading to version 9 to avoid any binary compatibility issues.
-
-### Immutable properties
-
-The public properties of several model classes are now immutable. If you need to create custom instances of these classes for testing, you can use the new parameterized constructors:
-
- - `AutocompleteItem`
- - `DocumentSearchResult`
- - `DocumentSuggestResult`
- - `FacetResult`
- - `SearchResult`
- - `SuggestResult`
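-
-For example, a unit test might construct a fake result object directly. This is only a sketch; the constructor parameter names are assumptions, not confirmed signatures:
-
-```csharp
-// Hypothetical test double for an autocomplete result.
-var item = new AutocompleteItem(text: "hotel", queryPlusText: "cheap hotel");
-```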
-
-### Changes to Field
-
-The `Field` class has changed now that it can also represent complex fields.
-
-The following `bool` properties are now nullable:
-
- - `IsFilterable`
- - `IsFacetable`
- - `IsSearchable`
- - `IsSortable`
- - `IsRetrievable`
- - `IsKey`
-
-This is because these properties must now be `null` in the case of complex fields. If you have code that reads these properties, it has to be prepared to handle `null`. Note that all other properties of `Field` have always been and continue to be nullable, and some of those will also be `null` in the case of complex fields -- specifically the following:
-
- - `Analyzer`
- - `SearchAnalyzer`
- - `IndexAnalyzer`
- - `SynonymMaps`
-
-The parameterless constructor of `Field` has been made `internal`. From now on, every `Field` requires an explicit name and data type at the time of construction.
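-
-If your code reads the nullable `bool` properties listed above, one way to cope is to treat `null` as `false`. A sketch, assuming `field` is a `Field` from an index definition:
-
-```csharp
-// IsFilterable is now bool? and is null for complex fields.
-bool canFilter = field.IsFilterable.GetValueOrDefault();
-
-// An explicit comparison is false for both null and false:
-if (field.IsSearchable == true)
-{
-    // Only simple, searchable fields reach this branch.
-}
-```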
-
-### Simplified batch and results types
-
-In version 7.0-preview and earlier, the various classes that encapsulate groups of documents were structured into parallel class hierarchies:
-
- - `DocumentSearchResult` and `DocumentSearchResult<T>` inherited from `DocumentSearchResultBase`
- - `DocumentSuggestResult` and `DocumentSuggestResult<T>` inherited from `DocumentSuggestResultBase`
- - `IndexAction` and `IndexAction<T>` inherited from `IndexActionBase`
- - `IndexBatch` and `IndexBatch<T>` inherited from `IndexBatchBase`
- - `SearchResult` and `SearchResult<T>` inherited from `SearchResultBase`
- - `SuggestResult` and `SuggestResult<T>` inherited from `SuggestResultBase`
-
-The derived types without a generic type parameter were meant to be used in "dynamically-typed" scenarios and assumed usage of the `Document` type.
-
-Starting with version 8.0-preview, the base classes and non-generic derived classes have all been removed. For dynamically-typed scenarios, you can use `IndexBatch<Document>`, `DocumentSearchResult<Document>`, and so on.
-
-### Removed ExtensibleEnum
-
-The `ExtensibleEnum` base class has been removed. All classes that derived from it, such as `AnalyzerName`, `DataType`, and `DataSourceType`, are now structs. Their `Create` methods have also been removed. You can just remove calls to `Create` since these types are implicitly convertible from strings. If that results in compiler errors, you can explicitly invoke the conversion operator via casting to disambiguate types. For example, you can change code like this:
-
-```csharp
-var index = new Index()
-{
- Fields = new[]
- {
- new Field("id", DataType.String) { IsKey = true },
- new Field("message", AnalyzerName.Create("my_email_analyzer")) { IsSearchable = true }
- },
- ...
-}
-```
-
-to this:
-
-```csharp
-var index = new Index()
-{
- Fields = new[]
- {
- new Field("id", DataType.String) { IsKey = true },
- new Field("message", (AnalyzerName)"my_email_analyzer") { IsSearchable = true }
- },
- ...
-}
-```
-
-Properties that held optional values of these types are now explicitly typed as nullable so they continue to be optional.
-
-### Removed FacetResults and HitHighlights
-
-The `FacetResults` and `HitHighlights` classes have been removed. Facet results are now typed as `IDictionary<string, IList<FacetResult>>` and hit highlights as `IDictionary<string, IList<string>>`. A quick way to resolve build errors introduced by this change is to add `using` aliases at the top of each file that uses the removed types. For example:
-
-```csharp
-using FacetResults = System.Collections.Generic.IDictionary<string, System.Collections.Generic.IList<Microsoft.Azure.Search.Models.FacetResult>>;
-using HitHighlights = System.Collections.Generic.IDictionary<string, System.Collections.Generic.IList<string>>;
-```
-
-### Change to SynonymMap
-
-The `SynonymMap` constructor no longer has an `enum` parameter for `SynonymMapFormat`. This enum only had one value, and was therefore redundant. If you see build errors as a result of this, simply remove references to the `SynonymMapFormat` parameter.
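-
-For example, a construction like the following (the map name and synonyms are illustrative):
-
-```csharp
-var synonymMap = new SynonymMap("desc-synonymmap", SynonymMapFormat.Solr, "hotel, motel");
-```
-
-becomes:
-
-```csharp
-var synonymMap = new SynonymMap("desc-synonymmap", "hotel, motel");
-```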
-
-### Miscellaneous model class changes
-
-The `AutocompleteMode` property of `AutocompleteParameters` is no longer nullable. If you have code that assigns this property to `null`, you can simply remove it and the property will automatically be initialized to the default value.
-
-The order of the parameters to the `IndexAction` constructor has changed now that this constructor is auto-generated. Instead of using the constructor, we recommend using the factory methods `IndexAction.Upload`, `IndexAction.Merge`, and so on.
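-
-A quick sketch of the factory-method style, using a model class like the `Hotel` class from the quickstarts:
-
-```csharp
-// The factory methods avoid any dependency on constructor parameter order.
-var uploadAction = IndexAction.Upload(new Hotel() { HotelId = "1" });
-var mergeAction = IndexAction.MergeOrUpload(new Hotel() { HotelId = "1" });
-```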
-
-### Removed preview features
-
-If you are upgrading from version 8.0-preview to version 9, be aware that encryption with customer-managed keys has been removed since this feature is still in preview. Specifically, the `EncryptionKey` properties of `Index` and `SynonymMap` have been removed.
-
-If your application has a hard dependency on this feature, you will not be able to upgrade to version 9 of the Azure Search .NET SDK. You can continue to use version 8.0-preview. However, please keep in mind that **we do not recommend using preview SDKs in production applications**. Preview features are for evaluation only and may change.
-
-> [!NOTE]
-> If you created encrypted indexes or synonym maps using version 8.0-preview of the SDK, you will still be able use them and modify their definitions using version 9 of the SDK without adversely affecting their encryption status. Version 9 of the SDK will not send the `encryptionKey` property to the REST API, and as a result the REST API will not change the encryption status of the resource.
-
-### Behavioral change in data retrieval
-
-If you're using the "dynamically typed" `Search`, `Suggest`, or `Get` APIs that return instances of type `Document`, be aware that they now deserialize empty JSON arrays to `object[]` instead of `string[]`.
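-
-If downstream code assumed `string[]`, cast by way of `object[]` instead. A sketch, assuming a dynamically typed lookup against an index that has a `Tags` collection field:
-
-```csharp
-// Requires: using System.Linq;
-Document doc = indexClient.Documents.Get("1");
-
-// Empty arrays now deserialize to object[]; a non-empty string[] is also
-// an object[] via array covariance, so this cast works in both cases.
-var tags = (object[])doc["Tags"];
-string[] tagStrings = tags.Select(t => (string)t).ToArray();
-```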
-
-## Conclusion
-If you need more details on using the Azure Search .NET SDK, see the [.NET How-to](search-howto-dotnet-sdk.md).
-
-We welcome your feedback on the SDK. If you encounter problems, feel free to ask us for help on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-search). If you find a bug, you can file an issue in the [Azure .NET SDK GitHub repository](https://github.com/Azure/azure-sdk-for-net/issues). Make sure to prefix your issue title with "[Azure Search]".
-
-Thank you for using Azure Search!
search Search Dotnet Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration.md
- Title: Upgrade to Azure Search .NET SDK version 3
-description: Migrate code to the Azure Search .NET SDK version 3 from older versions. Learn what's new and which code changes are required.
- Previously updated : 09/16/2021
-# Upgrade to Azure Search .NET SDK version 3
-
-If you're using version 2.0-preview or older of the [Azure Search .NET SDK](/dotnet/api/overview/azure/search), this article will help you upgrade your application to use version 3.
-
-For a more general walkthrough of the SDK including examples, see [How to use Azure Search from a .NET Application](search-howto-dotnet-sdk.md).
-
-Version 3 of the Azure Search .NET SDK contains some changes from earlier versions. These are mostly minor, so changing your code should require only minimal effort. See [Steps to upgrade](#UpgradeSteps) for instructions on how to change your code to use the new SDK version.
-
-> [!NOTE]
-> If you're using version 1.0.2-preview or older, you should upgrade to version 1.1 first, and then upgrade to version 3. See [Upgrading to the Azure Search .NET SDK version 1.1](search-dotnet-sdk-migration-version-1.md) for instructions.
->
-> Your Azure Search service instance supports several REST API versions, including the latest one. You can continue to use a version when it is no longer the latest one, but we recommend that you migrate your code to use the newest version. When using the REST API, you must specify the API version in every request via the api-version parameter. When using the .NET SDK, the version of the SDK you're using determines the corresponding version of the REST API. If you are using an older SDK, you can continue to run that code with no changes even if the service is upgraded to support a newer API version.
-
-<a name="WhatsNew"></a>
-
-## What's new in version 3
-Version 3 of the Azure Search .NET SDK targets the latest generally available version of the Azure Search REST API, specifically 2016-09-01. This makes it possible to use many new features of Azure Search from a .NET application, including the following:
-
-* [Custom analyzers](index-add-custom-analyzers.md)
-* [Azure Blob Storage](search-howto-indexing-azure-blob-storage.md) and [Azure Table Storage](search-howto-indexing-azure-tables.md) indexer support
-* Indexer customization via [field mappings](search-indexer-field-mappings.md)
-* ETags support to enable safe concurrent updating of index definitions, indexers, and data sources
-* Support for building index field definitions declaratively by decorating your model class and using the new `FieldBuilder` class.
-* Support for .NET Core and .NET Portable Profile 111
-
-<a name="UpgradeSteps"></a>
-
-## Steps to upgrade
-First, update your NuGet reference for `Microsoft.Azure.Search` using either the NuGet Package Manager Console or by right-clicking on your project references and selecting "Manage NuGet Packages..." in Visual Studio.
-
-Once NuGet has downloaded the new packages and their dependencies, rebuild your project. Depending on how your code is structured, it may rebuild successfully. If so, you're ready to go!
-
-If your build fails, you should see a build error like the following:
-
-```output
-Program.cs(31,45,31,86): error CS0266: Cannot implicitly convert type 'Microsoft.Azure.Search.ISearchIndexClient' to 'Microsoft.Azure.Search.SearchIndexClient'. An explicit conversion exists (are you missing a cast?)
-```
-
-The next step is to fix this build error. See [Breaking changes in version 3](#ListOfChanges) for details on what causes the error and how to fix it.
-
-You may see additional build warnings related to obsolete methods or properties. The warnings will include instructions on what to use instead of the deprecated feature. For example, if your application uses the `IndexingParameters.Base64EncodeKeys` property, you should get a warning that says `"This property is obsolete. Please create a field mapping using 'FieldMapping.Base64Encode' instead."`
-
-Once you've fixed any build errors, you can make changes to your application to take advantage of new functionality if you wish. New features in the SDK are detailed in [What's new in version 3](#WhatsNew).
-
-<a name="ListOfChanges"></a>
-
-## Breaking changes in version 3
-There are a small number of breaking changes in version 3 that may require code changes in addition to rebuilding your application.
-
-### Indexes.GetClient return type
-The `Indexes.GetClient` method has a new return type. Previously, it returned `SearchIndexClient`, but this was changed to `ISearchIndexClient` in version 2.0-preview, and that change carries over to version 3. This is to support customers that wish to mock the `GetClient` method for unit tests by returning a mock implementation of `ISearchIndexClient`.
-
-#### Example
-If your code looks like this:
-
-```csharp
-SearchIndexClient indexClient = serviceClient.Indexes.GetClient("hotels");
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-ISearchIndexClient indexClient = serviceClient.Indexes.GetClient("hotels");
-```
-
-### AnalyzerName, DataType, and others are no longer implicitly convertible to strings
-There are many types in the Azure Search .NET SDK that derive from `ExtensibleEnum`. Previously these types were all implicitly convertible to type `string`. However, a bug was discovered in the `Object.Equals` implementation for these classes, and fixing the bug required disabling this implicit conversion. Explicit conversion to `string` is still allowed.
-
-#### Example
-If your code looks like this:
-
-```csharp
-var customTokenizerName = TokenizerName.Create("my_tokenizer");
-var customTokenFilterName = TokenFilterName.Create("my_tokenfilter");
-var customCharFilterName = CharFilterName.Create("my_charfilter");
-
-var index = new Index();
-index.Analyzers = new Analyzer[]
-{
- new CustomAnalyzer(
- "my_analyzer",
- customTokenizerName,
- new[] { customTokenFilterName },
- new[] { customCharFilterName }),
-};
-```
-
-You can change it to this to fix any build errors:
-
-```csharp
-const string CustomTokenizerName = "my_tokenizer";
-const string CustomTokenFilterName = "my_tokenfilter";
-const string CustomCharFilterName = "my_charfilter";
-
-var index = new Index();
-index.Analyzers = new Analyzer[]
-{
- new CustomAnalyzer(
- "my_analyzer",
- CustomTokenizerName,
- new TokenFilterName[] { CustomTokenFilterName },
- new CharFilterName[] { CustomCharFilterName })
-};
-```
-
-### Removed obsolete members
-
-You may see build errors related to methods or properties that were marked as obsolete in version 2.0-preview and subsequently removed in version 3. If you encounter such errors, here is how to resolve them:
-- If you were using this constructor: `ScoringParameter(string name, string value)`, use this one instead: `ScoringParameter(string name, IEnumerable<string> values)`
-- If you were using the `ScoringParameter.Value` property, use the `ScoringParameter.Values` property or the `ToString` method instead.
-- If you were using the `SearchRequestOptions.RequestId` property, use the `ClientRequestId` property instead.
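-
-For example, a scoring parameter that used to wrap a single string can be built from a sequence of values instead. The parameter name and values here are illustrative:
-
-```csharp
-var scoringParameter = new ScoringParameter("mytags", new[] { "concierge", "view" });
-```
-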
-### Removed preview features
-
-If you are upgrading from version 2.0-preview to version 3, be aware that JSON and CSV parsing support for Blob Indexers has been removed since these features are still in preview. Specifically, the following methods of the `IndexingParametersExtensions` class have been removed:
-- `ParseJson`
-- `ParseJsonArrays`
-- `ParseDelimitedTextFiles`
-
-If your application has a hard dependency on these features, you will not be able to upgrade to version 3 of the Azure Search .NET SDK. You can continue to use version 2.0-preview. However, please keep in mind that **we do not recommend using preview SDKs in production applications**. Preview features are for evaluation only and may change.
-
-## Conclusion
-If you need more details on using the Azure Search .NET SDK, see the [.NET How-to](search-howto-dotnet-sdk.md).
-
-We welcome your feedback on the SDK. If you encounter problems, feel free to ask us for help on [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-search). If you find a bug, you can file an issue in the [Azure .NET SDK GitHub repository](https://github.com/Azure/azure-sdk-for-net/issues). Make sure to prefix your issue title with "[Azure Search]".
-
-Thank you for using Azure Search!
search Search Get Started Dotnet V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-dotnet-v10.md
- Title: 'Legacy C# quickstart'
-description: This C# quickstart uses the version 10 client library (Microsoft.Azure.Search) to create, load, and query a search index.
- Previously updated : 10/27/2020
-# Quickstart: Create a search index using the legacy Microsoft.Azure.Search v10 client library
-
-This article is the C# quickstart for the legacy [**Microsoft.Azure.Search**](/dotnet/api/overview/azure/search/client10) (version 10) client library, now superseded by the [**Azure.Search.Documents**](/dotnet/api/overview/azure/search.documents-readme) (version 11) client library.
-
-> [!NOTE]
-> If you have existing or inflight development projects, you can continue to use version 10. But for new projects, or to use new features, you should transition to the [new library](/dotnet/api/overview/azure/search.documents-readme).
-
-## About this quickstart
-
-Create a .NET Core console application in C# that creates, loads, and queries an Azure Cognitive Search index using Visual Studio and the [Microsoft.Azure.Search client libraries](/dotnet/api/overview/azure/search/client10).
-
-This article explains how to create the application. You could also [download and run the complete application](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v10).
-
-> [!NOTE]
-> The demo code in this article uses the synchronous methods of the Azure Cognitive Search version 10 .NET SDK for simplicity. However, for production scenarios, we recommend using the asynchronous methods in your own applications to keep them scalable and responsive. For example, you could use `CreateAsync` and `DeleteAsync` instead of `Create` and `Delete`.
-
-## Prerequisites
-
-Before you begin, you must have the following:
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-
-+ An Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) under your current subscription. You can use a free service for this quickstart.
-
-+ [Visual Studio](https://visualstudio.microsoft.com/downloads/), any edition. Sample code and instructions were tested on the free Community edition.
-
-<a name="get-service-info"></a>
-
-## Get a key and URL
-
-Calls to the service require a URL endpoint and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
-
-1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
-
-2. In **Settings** > **Keys**, get an admin key for full rights on the service. There are two interchangeable admin keys, provided for business continuity in case you need to roll one over. You can use either the primary or secondary key on requests for adding, modifying, and deleting objects.
-
- Get the query key as well. It's a best practice to issue query requests with read-only access.
-
-![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
-
-Every request sent to your service requires an api-key. A valid key establishes trust, on a per-request basis, between the application sending the request and the service that handles it.
-
-## Set up your environment
-
-Begin by opening Visual Studio and creating a new Console App project that can run on .NET Core.
-
-### Install NuGet packages
-
-The [Microsoft.Azure.Search package](https://www.nuget.org/packages/Microsoft.Azure.Search/) consists of a few client libraries that are distributed as NuGet packages.
-
-For this project, use version 10 of the `Microsoft.Azure.Search` NuGet package and the latest `Microsoft.Extensions.Configuration.Json` NuGet package.
-
-1. In **Tools** > **NuGet Package Manager**, select **Manage NuGet Packages for Solution...**.
-
-1. Click **Browse**.
-
-1. Search for `Microsoft.Azure.Search` and select version 10.
-
-1. Click **Install** on the right to add the assembly to your project and solution.
-
-1. Repeat for `Microsoft.Extensions.Configuration.Json`, selecting version 2.2.0 or later.
-
-### Add Azure Cognitive Search service information
-
-1. In Solution Explorer, right-click the project and select **Add** > **New Item...**.
-
-1. In Add New Item, search for "JSON" to return a JSON-related list of item types.
-
-1. Choose **JSON File**, name the file "appsettings.json", and click **Add**.
-
-1. Add the file to your output directory. Right-click appsettings.json and select **Properties**. In **Copy to Output Directory**, select **Copy if newer**.
-
-1. Copy the following JSON into your new JSON file.
-
- ```json
- {
- "SearchServiceName": "<YOUR-SEARCH-SERVICE-NAME>",
- "SearchServiceAdminApiKey": "<YOUR-ADMIN-API-KEY>",
- "SearchIndexName": "hotels-quickstart"
- }
- ```
-
-1. Replace the search service name (YOUR-SEARCH-SERVICE-NAME) and admin API key (YOUR-ADMIN-API-KEY) with valid values. If your service endpoint is `https://mydemo.search.windows.net`, the service name would be "`mydemo`".
-
-### Add class ".Methods" files to your project
-
-This step is required to produce meaningful output in the console. When printing results to the console window, individual fields from the Hotel object must be returned as strings. This step implements [ToString()](/dotnet/api/system.object.tostring) to perform this task, which you do by copying the necessary code to two new files.
-
-1. Add two empty class definitions to your project: Address.Methods.cs, Hotel.Methods.cs
-
-1. In Address.Methods.cs, overwrite the default contents with the following code, [lines 1-25](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/quickstart/v10/AzureSearchQuickstart/Address.Methods.cs#L1-L25).
-
-1. In Hotel.Methods.cs, copy [lines 1-68](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/quickstart/v10/AzureSearchQuickstart/Hotel.Methods.cs#L1-L68).
-
-## 1 - Create index
-
-The hotels index consists of simple and complex fields, where a simple field is "HotelName" or "Description", and complex fields are an address with subfields, or a collection of rooms. When an index includes complex types, isolate the complex field definitions in separate classes.
-
-1. Add two empty class definitions to your project: Address.cs, Hotel.cs
-
-1. In Address.cs, overwrite the default contents with the following code:
-
- ```csharp
- using System;
- using Microsoft.Azure.Search;
- using Microsoft.Azure.Search.Models;
- using Newtonsoft.Json;
-
- namespace AzureSearchQuickstart
- {
- public partial class Address
- {
- [IsSearchable]
- public string StreetAddress { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string City { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string StateProvince { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string PostalCode { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string Country { get; set; }
- }
- }
- ```
-
-1. In Hotel.cs, the class defines the overall structure of the index, including references to the address class.
-
- ```csharp
- namespace AzureSearchQuickstart
- {
- using System;
- using Microsoft.Azure.Search;
- using Microsoft.Azure.Search.Models;
- using Newtonsoft.Json;
-
- public partial class Hotel
- {
- [System.ComponentModel.DataAnnotations.Key]
- [IsFilterable]
- public string HotelId { get; set; }
-
- [IsSearchable, IsSortable]
- public string HotelName { get; set; }
-
- [IsSearchable]
- [Analyzer(AnalyzerName.AsString.EnMicrosoft)]
- public string Description { get; set; }
-
- [IsSearchable]
- [Analyzer(AnalyzerName.AsString.FrLucene)]
- [JsonProperty("Description_fr")]
- public string DescriptionFr { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string Category { get; set; }
-
- [IsSearchable, IsFilterable, IsFacetable]
- public string[] Tags { get; set; }
-
- [IsFilterable, IsSortable, IsFacetable]
- public bool? ParkingIncluded { get; set; }
-
- [IsFilterable, IsSortable, IsFacetable]
- public DateTimeOffset? LastRenovationDate { get; set; }
-
- [IsFilterable, IsSortable, IsFacetable]
- public double? Rating { get; set; }
-
- public Address Address { get; set; }
- }
- }
- ```
-
- Attributes on the field determine how it is used in an application. For example, the `IsSearchable` attribute must be assigned to every field that should be included in a full text search.
-
- > [!NOTE]
- > In the .NET SDK, fields must be explicitly attributed as [`IsSearchable`](/dotnet/api/microsoft.azure.search.models.field.issearchable), [`IsFilterable`](/dotnet/api/microsoft.azure.search.models.field.isfilterable), [`IsSortable`](/dotnet/api/microsoft.azure.search.models.field.issortable), and [`IsFacetable`](/dotnet/api/microsoft.azure.search.models.field.isfacetable). This behavior is in contrast with the REST API which implicitly enables attribution based on data type (for example, simple string fields are automatically searchable).
-
- Exactly one field in your index of type `string` must be the *key* field, uniquely identifying each document. In this schema, the key is `HotelId`.
-
- In this index, the description fields use the optional [`analyzer`](/dotnet/api/microsoft.azure.search.models.field.analyzer) property, specified when you want to override the default standard Lucene analyzer. The `description_fr` field uses the French Lucene analyzer ([FrLucene](/dotnet/api/microsoft.azure.search.models.analyzername.frlucene)) because it stores French text. The `description` field uses the optional Microsoft English language analyzer ([EnMicrosoft](/dotnet/api/microsoft.azure.search.models.analyzername.enmicrosoft)).
-
-1. In Program.cs, create an instance of the [`SearchServiceClient`](/dotnet/api/microsoft.azure.search.searchserviceclient) class to connect to the service, using values that are stored in the application's config file (appsettings.json).
-
- `SearchServiceClient` has an [`Indexes`](/dotnet/api/microsoft.azure.search.searchserviceclient.indexes) property, providing all the methods you need to create, list, update, or delete Azure Cognitive Search indexes.
-
- ```csharp
- using System;
- using System.Linq;
- using System.Threading;
- using Microsoft.Azure.Search;
- using Microsoft.Azure.Search.Models;
- using Microsoft.Extensions.Configuration;
-
- namespace AzureSearchQuickstart
- {
- class Program {
- // Demonstrates index delete, create, load, and query
- // Commented-out code is uncommented in later steps
- static void Main(string[] args)
- {
- IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
- IConfigurationRoot configuration = builder.Build();
-
- SearchServiceClient serviceClient = CreateSearchServiceClient(configuration);
-
- string indexName = configuration["SearchIndexName"];
-
- Console.WriteLine("{0}", "Deleting index...\n");
- DeleteIndexIfExists(indexName, serviceClient);
-
- Console.WriteLine("{0}", "Creating index...\n");
- CreateIndex(indexName, serviceClient);
-
- // Uncomment next 3 lines in "2 - Load documents"
- // ISearchIndexClient indexClient = serviceClient.Indexes.GetClient(indexName);
- // Console.WriteLine("{0}", "Uploading documents...\n");
- // UploadDocuments(indexClient);
-
- // Uncomment next 2 lines in "3 - Search an index"
- // Console.WriteLine("{0}", "Searching index...\n");
- // RunQueries(indexClient);
-
- Console.WriteLine("{0}", "Complete. Press any key to end application...\n");
- Console.ReadKey();
- }
-
- // Create the search service client
- private static SearchServiceClient CreateSearchServiceClient(IConfigurationRoot configuration)
- {
- string searchServiceName = configuration["SearchServiceName"];
- string adminApiKey = configuration["SearchServiceAdminApiKey"];
-
- SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
- return serviceClient;
- }
-
- // Delete an existing index to reuse its name
- private static void DeleteIndexIfExists(string indexName, SearchServiceClient serviceClient)
- {
- if (serviceClient.Indexes.Exists(indexName))
- {
- serviceClient.Indexes.Delete(indexName);
- }
- }
-
- // Create an index whose fields correspond to the properties of the Hotel class.
- // The Address property of Hotel will be modeled as a complex field.
- // The properties of the Address class in turn correspond to sub-fields of the Address complex field.
- // The fields of the index are defined by calling the FieldBuilder.BuildForType() method.
- private static void CreateIndex(string indexName, SearchServiceClient serviceClient)
- {
- var definition = new Microsoft.Azure.Search.Models.Index()
- {
- Name = indexName,
- Fields = FieldBuilder.BuildForType<Hotel>()
- };
-
- serviceClient.Indexes.Create(definition);
- }
- }
- }
- ```
-
- If possible, share a single instance of `SearchServiceClient` in your application to avoid opening too many connections. Class methods are thread-safe to enable such sharing.
-
- The class has several constructors. The one you want takes your search service name and a `SearchCredentials` object as parameters. `SearchCredentials` wraps your api-key.
-
- In the index definition, the easiest way to create the `Field` objects is by calling the `FieldBuilder.BuildForType` method, passing a model class for the type parameter. A model class has properties that map to the fields of your index. This mapping allows you to bind documents from your search index to instances of your model class.
-
- > [!NOTE]
- > If you don't plan to use a model class, you can still define your index by creating `Field` objects directly. You can provide the name of the field to the constructor, along with the data type (or analyzer for string fields). You can also set other properties like `IsSearchable`, `IsFilterable`, to name a few.
- >
-
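- A sketch of that direct approach (the field names here are illustrative):
-
- ```csharp
- var fields = new[]
- {
-     new Field("HotelId", DataType.String) { IsKey = true, IsFilterable = true },
-     new Field("HotelName", DataType.String) { IsSearchable = true, IsSortable = true },
-     new Field("Description", AnalyzerName.EnMicrosoft)
- };
- ```
-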
-1. Press F5 to build the app and create the index.
-
- If the project builds successfully, a console window opens, writing status messages to the screen for deleting and creating the index.
-
-<a name="load-documents"></a>
-
-## 2 - Load documents
-
-In Azure Cognitive Search, documents are data structures that are both inputs to indexing and outputs from queries. As obtained from an external data source, document inputs might be rows in a database, blobs in Blob storage, or JSON documents on disk. In this example, we're taking a shortcut and embedding JSON documents for four hotels in the code itself.
-
-When uploading documents, you must use an [`IndexBatch`](/dotnet/api/microsoft.azure.search.models.indexbatch) object. An `IndexBatch` contains a collection of [`IndexAction`](/dotnet/api/microsoft.azure.search.models.indexaction) objects, each of which contains a document and a property telling Azure Cognitive Search what action to perform ([upload, merge, delete, and mergeOrUpload](search-what-is-data-import.md#indexing-actions)).
-
-1. In Program.cs, create an array of documents and index actions, and then pass the array to `IndexBatch`. The documents below conform to the hotel-quickstart index, as defined by the hotel and address classes.
-
- ```csharp
- // Upload documents as a batch
- private static void UploadDocuments(ISearchIndexClient indexClient)
- {
- var actions = new IndexAction<Hotel>[]
- {
- IndexAction.Upload(
- new Hotel()
- {
- HotelId = "1",
- HotelName = "Secret Point Motel",
- Description = "The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Time's Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.",
- DescriptionFr = "L'hôtel est idéalement situé sur la principale artère commerciale de la ville en plein cœur de New York. A quelques minutes se trouve la place du temps et le centre historique de la ville, ainsi que d'autres lieux d'intérêt qui font de New York l'une des villes les plus attractives et cosmopolites de l'Amérique.",
- Category = "Boutique",
- Tags = new[] { "pool", "air conditioning", "concierge" },
- ParkingIncluded = false,
- LastRenovationDate = new DateTimeOffset(1970, 1, 18, 0, 0, 0, TimeSpan.Zero),
- Rating = 3.6,
- Address = new Address()
- {
- StreetAddress = "677 5th Ave",
- City = "New York",
- StateProvince = "NY",
- PostalCode = "10022",
- Country = "USA"
- }
- }
- ),
- IndexAction.Upload(
- new Hotel()
- {
- HotelId = "2",
- HotelName = "Twin Dome Motel",
- Description = "The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to the highest architectural standards to create a modern, functional and first-class hotel in which art and unique historical elements coexist with the most modern comforts.",
- DescriptionFr = "L'hôtel est situé dans une place du XIXe siècle, qui a été agrandie et rénovée aux plus hautes normes architecturales pour créer un hôtel moderne, fonctionnel et de première classe dans lequel l'art et les éléments historiques uniques coexistent avec le confort le plus moderne.",
- Category = "Boutique",
- Tags = new[] { "pool", "free wifi", "concierge" },
- ParkingIncluded = false,
- LastRenovationDate = new DateTimeOffset(1979, 2, 18, 0, 0, 0, TimeSpan.Zero),
- Rating = 3.60,
- Address = new Address()
- {
- StreetAddress = "140 University Town Center Dr",
- City = "Sarasota",
- StateProvince = "FL",
- PostalCode = "34243",
- Country = "USA"
- }
- }
- ),
- IndexAction.Upload(
- new Hotel()
- {
- HotelId = "3",
- HotelName = "Triple Landscape Hotel",
- Description = "The Hotel stands out for its gastronomic excellence under the management of William Dough, who advises on and oversees all of the HotelΓÇÖs restaurant services.",
- DescriptionFr = "L'hôtel est situé dans une place du XIXe siècle, qui a été agrandie et rénovée aux plus hautes normes architecturales pour créer un hôtel moderne, fonctionnel et de première classe dans lequel l'art et les éléments historiques uniques coexistent avec le confort le plus moderne.",
- Category = "Resort and Spa",
- Tags = new[] { "air conditioning", "bar", "continental breakfast" },
- ParkingIncluded = true,
- LastRenovationDate = new DateTimeOffset(2015, 9, 20, 0, 0, 0, TimeSpan.Zero),
- Rating = 4.80,
- Address = new Address()
- {
- StreetAddress = "3393 Peachtree Rd",
- City = "Atlanta",
- StateProvince = "GA",
- PostalCode = "30326",
- Country = "USA"
- }
- }
- ),
- IndexAction.Upload(
- new Hotel()
- {
- HotelId = "4",
- HotelName = "Sublime Cliff Hotel",
- Description = "Sublime Cliff Hotel is located in the heart of the historic center of Sublime in an extremely vibrant and lively area within short walking distance to the sites and landmarks of the city and is surrounded by the extraordinary beauty of churches, buildings, shops and monuments. Sublime Cliff is part of a lovingly restored 1800 palace.",
- DescriptionFr = "Le sublime Cliff Hotel est situé au coeur du centre historique de sublime dans un quartier extrêmement animé et vivant, à courte distance de marche des sites et monuments de la ville et est entouré par l'extraordinaire beauté des églises, des bâtiments, des commerces et Monuments. Sublime Cliff fait partie d'un Palace 1800 restauré avec amour.",
- Category = "Boutique",
- Tags = new[] { "concierge", "view", "24-hour front desk service" },
- ParkingIncluded = true,
- LastRenovationDate = new DateTimeOffset(1960, 2, 06, 0, 0, 0, TimeSpan.Zero),
- Rating = 4.6,
- Address = new Address()
- {
- StreetAddress = "7400 San Pedro Ave",
- City = "San Antonio",
- StateProvince = "TX",
- PostalCode = "78216",
- Country = "USA"
- }
- }
- ),
- };
-
- var batch = IndexBatch.New(actions);
-
- try
- {
- indexClient.Documents.Index(batch);
- }
- catch (IndexBatchException e)
- {
- // When a service is under load, indexing might fail for some documents in the batch.
- // Depending on your application, you can compensate by delaying and retrying.
- // For this simple demo, we just log the failed document keys and continue.
- Console.WriteLine(
- "Failed to index some of the documents: {0}",
- String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
- }
-
- // Wait 2 seconds before starting queries
- Console.WriteLine("Waiting for indexing...\n");
- Thread.Sleep(2000);
- }
- ```
-
- Once you initialize the `IndexBatch` object, you can send it to the index by calling [`Documents.Index`](/dotnet/api/microsoft.azure.search.documentsoperationsextensions.index) on your [`SearchIndexClient`](/dotnet/api/microsoft.azure.search.searchindexclient) object. `Documents` is a property of `SearchIndexClient` that provides methods for adding, modifying, deleting, or querying documents in your index.
-
- The `try`/`catch` surrounding the call to the `Index` method catches indexing failures, which might happen if your service is under heavy load. In production code, you could delay and then retry indexing the documents that failed, or log and continue like the sample does, or handle it in some other way that meets your application's data consistency requirements.
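-
- One possible shape for the retry approach is to rebuild a batch from only the failed actions. This is a sketch; the single retry and fixed 5-second delay are illustrative, not a recommended policy:
-
- ```csharp
- try
- {
-     indexClient.Documents.Index(batch);
- }
- catch (IndexBatchException e)
- {
-     // Keep only the actions that failed, keyed by document ID.
-     var retryBatch = e.FindFailedActionsToRetry(batch, (Hotel h) => h.HotelId);
-
-     // Back off briefly, then retry once.
-     Thread.Sleep(TimeSpan.FromSeconds(5));
-     indexClient.Documents.Index(retryBatch);
- }
- ```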
-
- The 2-second delay compensates for indexing, which is asynchronous, so that all documents can be indexed before the queries are executed. Coding in a delay is typically only necessary in demos, tests, and sample applications.
-
-1. In Program.cs, in main, uncomment the lines for "2 - Load documents".
-
- ```csharp
- // Uncomment next 3 lines in "2 - Load documents"
- ISearchIndexClient indexClient = serviceClient.Indexes.GetClient(indexName);
- Console.WriteLine("{0}", "Uploading documents...\n");
- UploadDocuments(indexClient);
- ```
-1. Press F5 to rebuild the app.
-
- If the project builds successfully, a console window opens, writing status messages, this time with a message about uploading documents. In the Azure portal, in the search service **Overview** page, the hotels-quickstart index should now have 4 documents.
-
-For more information about document processing, see ["How the .NET SDK handles documents"](search-howto-dotnet-sdk-v10.md#how-dotnet-handles-documents).
-
-## 3 - Search an index
-
-You can get query results as soon as the first document is indexed, but actual testing of your index should wait until all documents are indexed.
-
-This section adds two pieces of functionality: query logic, and results. For queries, use the [`Search`](/dotnet/api/microsoft.azure.search.documentsoperationsextensions.search) method. This method takes search text as well as other [parameters](/dotnet/api/microsoft.azure.search.models.searchparameters).
-
-The [`DocumentSearchResult`](/dotnet/api/microsoft.azure.search.models.documentsearchresult-1) class represents the results.
-
-1. In Program.cs, create a WriteDocuments method that prints search results to the console.
-
- ```csharp
- private static void WriteDocuments(DocumentSearchResult<Hotel> searchResults)
- {
- foreach (SearchResult<Hotel> result in searchResults.Results)
- {
- Console.WriteLine(result.Document);
- }
-
- Console.WriteLine();
- }
- ```
-
-1. Create a RunQueries method to execute queries and return results. Results are Hotel objects. You can use the select parameter to surface individual fields. If a field is not included in the select parameter, its corresponding Hotel property will be null.
-
- ```csharp
- private static void RunQueries(ISearchIndexClient indexClient)
- {
- SearchParameters parameters;
- DocumentSearchResult<Hotel> results;
-
- // Query 1
- Console.WriteLine("Query 1: Search for term 'Atlanta' with no result trimming");
- parameters = new SearchParameters();
- results = indexClient.Documents.Search<Hotel>("Atlanta", parameters);
- WriteDocuments(results);
-
- // Query 2
- Console.WriteLine("Query 2: Search on the term 'Atlanta', with trimming");
- Console.WriteLine("Returning only these fields: HotelName, Tags, Address:\n");
- parameters =
- new SearchParameters()
- {
- Select = new[] { "HotelName", "Tags", "Address" },
- };
- results = indexClient.Documents.Search<Hotel>("Atlanta", parameters);
- WriteDocuments(results);
-
- // Query 3
- Console.WriteLine("Query 3: Search for the terms 'restaurant' and 'wifi'");
- Console.WriteLine("Return only these fields: HotelName, Description, and Tags:\n");
- parameters =
- new SearchParameters()
- {
- Select = new[] { "HotelName", "Description", "Tags" }
- };
- results = indexClient.Documents.Search<Hotel>("restaurant, wifi", parameters);
- WriteDocuments(results);
-
- // Query 4 -filtered query
- Console.WriteLine("Query 4: Filter on ratings greater than 4");
- Console.WriteLine("Returning only these fields: HotelName, Rating:\n");
- parameters =
- new SearchParameters()
- {
- Filter = "Rating gt 4",
- Select = new[] { "HotelName", "Rating" }
- };
- results = indexClient.Documents.Search<Hotel>("*", parameters);
- WriteDocuments(results);
-
- // Query 5 - top 2 results
- Console.WriteLine("Query 5: Search on term 'boutique'");
- Console.WriteLine("Sort by rating in descending order, taking the top two results");
- Console.WriteLine("Returning only these fields: HotelId, HotelName, Category, Rating:\n");
- parameters =
- new SearchParameters()
- {
- OrderBy = new[] { "Rating desc" },
- Select = new[] { "HotelId", "HotelName", "Category", "Rating" },
- Top = 2
- };
- results = indexClient.Documents.Search<Hotel>("boutique", parameters);
- WriteDocuments(results);
- }
- ```
-
- There are two [ways of matching terms in a query](search-query-overview.md#types-of-queries): full-text search, and filters. A full-text search query searches for one or more terms in `IsSearchable` fields in your index. A filter is a boolean expression that is evaluated over `IsFilterable` fields in an index. You can use full-text search and filters together or separately.
-
- Both searches and filters are performed using the `Documents.Search` method. A search query can be passed in the `searchText` parameter, while a filter expression can be passed in the `Filter` property of the `SearchParameters` class. To filter without searching, just pass `"*"` for the `searchText` parameter. To search without filtering, just leave the `Filter` property unset, or do not pass in a `SearchParameters` instance at all.
-
-1. In Program.cs, in main, uncomment the lines for "3 - Search".
-
- ```csharp
- // Uncomment next 2 lines in "3 - Search an index"
- Console.WriteLine("{0}", "Searching documents...\n");
- RunQueries(indexClient);
- ```
-1. The solution is now finished. Press F5 to rebuild the app and run the program in its entirety.
-
- Output includes the same messages as before, with addition of query information and results.
-
-## Clean up resources
-
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-
-You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
-
-## Next steps
-
-In this C# quickstart, you worked through a series of tasks to create an index, load it with documents, and run queries. At different stages, we took shortcuts to simplify the code for readability and comprehension. If you are comfortable with the basic concepts, we recommend the next article for an exploration of alternative approaches and concepts that will deepen your knowledge.
-
-The sample code and index are expanded versions of this one. The next sample adds a Rooms collection, uses different classes and actions, and takes a closer look at how processing works.
-
-> [!div class="nextstepaction"]
-> [How to develop in .NET](search-howto-dotnet-sdk.md)
-
-Want to optimize and save on your cloud spending?
-
-> [!div class="nextstepaction"]
-> [Start analyzing costs with Cost Management](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn)
search Search Get Started Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-dotnet.md
Use the new [Azure.Search.Documents (version 11) client library](/dotnet/api/ove
You can [download the source code](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11) to start with a finished project or follow the steps in this article to create your own.

> [!NOTE]
-> Looking for an earlier version? See [Create a search index using Microsoft.Azure.Search v10](search-get-started-dotnet-v10.md) instead.
+> Looking for an earlier version? See [Create a search index using Microsoft.Azure.Search v10](/previous-versions/azure/search/search-get-started-dotnet-v10) instead.
## Prerequisites
search Search Howto Complex Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-complex-data-types.md
To get started, we recommend the [Hotels data set](https://github.com/Azure-Samp
> [!Note]
> Support for complex types became generally available starting in `api-version=2019-05-06`.
>
-> If your search solution is built on earlier workarounds of flattened datasets in a collection, you should change your index to include complex types as supported in the newest API version. For more information about upgrading API versions, see [Upgrade to the newest REST API version](search-api-migration.md) or [Upgrade to the newest .NET SDK version](search-dotnet-sdk-migration-version-9.md).
+> If your search solution is built on earlier workarounds of flattened datasets in a collection, you should change your index to include complex types as supported in the newest API version. For more information about upgrading API versions, see [Upgrade to the newest REST API version](search-api-migration.md) or [Upgrade to the newest .NET SDK version](/previous-versions/azure/search/search-dotnet-sdk-migration-version-9).
## Example of a complex structure
search Search Howto Dotnet Sdk V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk-v10.md
- Title: Use Microsoft.Azure.Search (v10) in .NET
-description: Learn how to create and manage search objects in a .NET application using C# and Microsoft.Azure.Search (version 10) of the .NET SDK. Code snippets demonstrate connecting to the service, creating indexes, and queries.
- Previously updated : 10/27/2020
-# How to use Microsoft.Azure.Search in a C# .NET application
-
-This article explains how to create and manage search objects using C# and the legacy client library, [**Microsoft.Azure.Search**](/dotnet/api/overview/azure/search/client10) (version 10) in the Azure SDK for .NET.
-
-Version 10 is the last version of the Microsoft.Azure.Search package. Moving forward, new features will be rolled out in [**Azure.Search.Documents**](/dotnet/api/overview/azure/search.documents-readme) from the Azure SDK team.
-
-> [!NOTE]
-> If you have existing or inflight development projects, you can continue to use version 10. For new projects, or to use new features, you should transition to the [new library](/dotnet/api/overview/azure/search.documents-readme).
-
-## About version 10
-
-The SDK consists of a few client libraries that enable you to manage your indexes, data sources, indexers, and synonym maps, as well as upload and manage documents, and execute queries, all without having to deal with the details of HTTP and JSON. These client libraries are all distributed as NuGet packages.
-
-The main NuGet package is `Microsoft.Azure.Search`, which is a meta-package that includes all the other packages as dependencies. Use this package if you're just getting started or if you know your application will need all the features of Azure Cognitive Search.
-
-The other NuGet packages in the SDK are:
-
- - `Microsoft.Azure.Search.Data`: Use this package if you're developing a .NET application using Azure Cognitive Search, and you only need to query or update documents in your indexes. If you also need to create or update indexes, synonym maps, or other service-level resources, use the `Microsoft.Azure.Search` package instead.
- - `Microsoft.Azure.Search.Service`: Use this package if you're developing automation in .NET to manage Azure Cognitive Search indexes, synonym maps, indexers, data sources, or other service-level resources. If you only need to query or update documents in your indexes, use the `Microsoft.Azure.Search.Data` package instead. If you need all the functionality of Azure Cognitive Search, use the `Microsoft.Azure.Search` package instead.
- - `Microsoft.Azure.Search.Common`: Common types needed by the Azure Cognitive Search .NET libraries. You do not need to use this package directly in your application. It is only meant to be used as a dependency.
-
-The various client libraries define classes like `Index`, `Field`, and `Document`, as well as operations like `Indexes.Create` and `Documents.Search` on the `SearchServiceClient` and `SearchIndexClient` classes. These classes are organized into the following namespaces:
-
-* [Microsoft.Azure.Search](/dotnet/api/microsoft.azure.search)
-* [Microsoft.Azure.Search.Models](/dotnet/api/microsoft.azure.search.models)
-
-If you would like to provide feedback for a future update of the SDK, see our [feedback page](https://feedback.azure.com/d365community/forum/9325d19e-0225-ec11-b6e6-000d3a4f07b8) or create an issue on [GitHub](https://github.com/azure/azure-sdk-for-net/issues) and mention "Azure Cognitive Search" in the issue title.
-
-The .NET SDK targets version [`2019-05-06` of the Azure Cognitive Search REST API](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/search/data-plane/Microsoft.Azure.Search.Dat) when indexing Azure Blobs.
-
-This SDK does not support [Management Operations](/rest/api/searchmanagement/) such as creating and scaling Search services and managing API keys. If you need to manage your Search resources from a .NET application, you can use the [Azure Cognitive Search .NET Management SDK](/dotnet/api/overview/azure/search/management).
-
-## Upgrade to v10
-If you're already using an older version of the Azure Cognitive Search .NET SDK and you'd like to upgrade to the latest generally available version, [this article](search-dotnet-sdk-migration-version-9.md) explains how.
-
-## SDK requirements
-1. Visual Studio 2017 or later.
-2. Your own Azure Cognitive Search service. To use the SDK, you'll need the name of your service and one or more API keys. See [Create a service in the portal](search-create-service-portal.md) for help with these steps.
-3. Download the Azure Cognitive Search .NET SDK [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Search) by using "Manage NuGet Packages" in Visual Studio. Just search for the package name `Microsoft.Azure.Search` on NuGet.org (or one of the other package names above if you only need a subset of the functionality).
-
-The Azure Cognitive Search .NET SDK supports applications targeting the .NET Framework 4.5.2 and higher, as well as .NET Core 2.0 and higher.
-
-## Core scenarios
-There are several things you'll need to do in your search application. In this tutorial, we'll cover these core scenarios:
-
-* Creating an index
-* Populating the index with documents
-* Searching for documents using full-text search and filters
-
-The following sample code illustrates each of these scenarios. Feel free to use the code snippets in your own application.
-
-### Overview
-The sample application we'll be exploring creates a new index named "hotels", populates it with a few documents, then executes some search queries. Here is the main program, showing the overall flow:
-
-```csharp
-// This sample shows how to delete, create, upload documents and query an index
-static void Main(string[] args)
-{
- IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
- IConfigurationRoot configuration = builder.Build();
-
- SearchServiceClient serviceClient = CreateSearchServiceClient(configuration);
-
- string indexName = configuration["SearchIndexName"];
-
- Console.WriteLine("{0}", "Deleting index...\n");
- DeleteIndexIfExists(indexName, serviceClient);
-
- Console.WriteLine("{0}", "Creating index...\n");
- CreateIndex(indexName, serviceClient);
-
- ISearchIndexClient indexClient = serviceClient.Indexes.GetClient(indexName);
-
- Console.WriteLine("{0}", "Uploading documents...\n");
- UploadDocuments(indexClient);
-
-    ISearchIndexClient indexClientForQueries = CreateSearchIndexClient(indexName, configuration);
-
- RunQueries(indexClientForQueries);
-
- Console.WriteLine("{0}", "Complete. Press any key to end application...\n");
- Console.ReadKey();
-}
-```
-
-> [!NOTE]
-> You can find the full source code of the sample application used in this walk through on [GitHub](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo).
->
->
-
-We'll walk through this step by step. First we need to create a new `SearchServiceClient`. This object allows you to manage indexes. In order to construct one, you need to provide your Azure Cognitive Search service name as well as an admin API key. You can enter this information in the `appsettings.json` file of the [sample application](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo).
-
-```csharp
-private static SearchServiceClient CreateSearchServiceClient(IConfigurationRoot configuration)
-{
- string searchServiceName = configuration["SearchServiceName"];
- string adminApiKey = configuration["SearchServiceAdminApiKey"];
-
- SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
- return serviceClient;
-}
-```
-
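-For reference, a minimal `appsettings.json` for this sample might look like the following sketch. The key names are taken from the configuration lookups in the sample code; the values are placeholders:
-
-```json
-{
-  "SearchServiceName": "<your-search-service-name>",
-  "SearchServiceAdminApiKey": "<your-admin-api-key>",
-  "SearchServiceQueryApiKey": "<your-query-api-key>",
-  "SearchIndexName": "hotels"
-}
-```
-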
-> [!NOTE]
-> If you provide an incorrect key (for example, a query key where an admin key was required), the `SearchServiceClient` will throw a `CloudException` with the error message "Forbidden" the first time you call an operation method on it, such as `Indexes.Create`. If this happens to you, double-check your API key.
->
->
-
-The next few lines call methods to create an index named "hotels", deleting it first if it already exists. We will walk through these methods a little later.
-
-```csharp
-Console.WriteLine("{0}", "Deleting index...\n");
-DeleteIndexIfExists(indexName, serviceClient);
-
-Console.WriteLine("{0}", "Creating index...\n");
-CreateIndex(indexName, serviceClient);
-```
-
-Next, the index needs to be populated. To populate the index, we need a `SearchIndexClient`. There are two ways to obtain one: by constructing it, or by calling `Indexes.GetClient` on the `SearchServiceClient`. We use the latter for convenience.
-
-```csharp
-ISearchIndexClient indexClient = serviceClient.Indexes.GetClient(indexName);
-```
-
-> [!NOTE]
-> In a typical search application, index management and population may be handled by a separate component from search queries. `Indexes.GetClient` is convenient for populating an index because it saves you the trouble of providing additional `SearchCredentials`. It does this by passing the admin key that you used to create the `SearchServiceClient` to the new `SearchIndexClient`. However, in the part of your application that executes queries, it is better to create the `SearchIndexClient` directly so that you can pass in a query key, which only allows you to read data, instead of an admin key. This is consistent with the principle of least privilege and will help to make your application more secure. You can find out more about admin keys and query keys [here](/rest/api/searchservice/#authentication-and-authorization).
->
->
-
-Now that we have a `SearchIndexClient`, we can populate the index. Index population is done by another method that we will walk through later.
-
-```csharp
-Console.WriteLine("{0}", "Uploading documents...\n");
-UploadDocuments(indexClient);
-```
-
-Finally, we execute a few search queries and display the results. This time we use a different `SearchIndexClient`:
-
-```csharp
-ISearchIndexClient indexClientForQueries = CreateSearchIndexClient(indexName, configuration);
-
-RunQueries(indexClientForQueries);
-```
-
-We will take a closer look at the `RunQueries` method later. Here is the code to create the new `SearchIndexClient`:
-
-```csharp
-private static SearchIndexClient CreateSearchIndexClient(string indexName, IConfigurationRoot configuration)
-{
- string searchServiceName = configuration["SearchServiceName"];
- string queryApiKey = configuration["SearchServiceQueryApiKey"];
-
- SearchIndexClient indexClient = new SearchIndexClient(searchServiceName, indexName, new SearchCredentials(queryApiKey));
- return indexClient;
-}
-```
-
-This time we use a query key since we do not need write access to the index. You can enter this information in the `appsettings.json` file of the [sample application](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo).
-
-If you run this application with a valid service name and API keys, the output should look like this example:
-(Some console output has been replaced with "..." for illustration purposes.)
-
-```output
-
-Deleting index...
-
-Creating index...
-
-Uploading documents...
-
-Waiting for documents to be indexed...
-
-Search the entire index for the term 'motel' and return only the HotelName field:
-
-Name: Secret Point Motel
-
-Name: Twin Dome Motel
--
-Apply a filter to the index to find hotels with a room cheaper than $100 per night, and return the hotelId and description:
-
-HotelId: 1
-Description: The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Times Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.
-
-HotelId: 2
-Description: The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to the highest architectural standards to create a modern, functional and first-class hotel in which art and unique historical elements coexist with the most modern comforts.
--
-Search the entire index, order by a specific field (lastRenovationDate) in descending order, take the top two results, and show only hotelName and lastRenovationDate:
-
-Name: Triple Landscape Hotel
-Last renovated on: 9/20/2015 12:00:00 AM +00:00
-
-Name: Twin Dome Motel
-Last renovated on: 2/18/1979 12:00:00 AM +00:00
--
-Search the hotel names for the term 'hotel':
-
-HotelId: 3
-Name: Triple Landscape Hotel
-...
-
-Complete. Press any key to end application...
-```
-
-The full source code of the application is provided at the end of this article.
-
-Next, we will take a closer look at each of the methods called by `Main`.
-
-### Creating an index
-After creating a `SearchServiceClient`, `Main` deletes the "hotels" index if it already exists. That deletion is done by the following method:
-
-```csharp
-private static void DeleteIndexIfExists(string indexName, SearchServiceClient serviceClient)
-{
- if (serviceClient.Indexes.Exists(indexName))
- {
- serviceClient.Indexes.Delete(indexName);
- }
-}
-```
-
-This method uses the given `SearchServiceClient` to check if the index exists, and if so, delete it.
-
-> [!NOTE]
-> The example code in this article uses the synchronous methods of the Azure Cognitive Search .NET SDK for simplicity. We recommend that you use the asynchronous methods in your own applications to keep them scalable and responsive. For example, in the method above you could use `ExistsAsync` and `DeleteAsync` instead of `Exists` and `Delete`.
->
->
-
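-As a sketch, an async variant of the same method might look like this (it requires `using System.Threading.Tasks;`):
-
-```csharp
-private static async Task DeleteIndexIfExistsAsync(string indexName, SearchServiceClient serviceClient)
-{
-    // Same logic as above, using the awaitable v10 methods.
-    if (await serviceClient.Indexes.ExistsAsync(indexName))
-    {
-        await serviceClient.Indexes.DeleteAsync(indexName);
-    }
-}
-```
-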
-Next, `Main` creates a new "hotels" index by calling this method:
-
-```csharp
-private static void CreateIndex(string indexName, SearchServiceClient serviceClient)
-{
- var definition = new Index()
- {
- Name = indexName,
- Fields = FieldBuilder.BuildForType<Hotel>()
- };
-
- serviceClient.Indexes.Create(definition);
-}
-```
-
-This method creates a new `Index` object with a list of `Field` objects that defines the schema of the new index. Each field has a name, data type, and several attributes that define its search behavior. The `FieldBuilder` class uses reflection to create a list of `Field` objects for the index by examining the public properties and attributes of the given `Hotel` model class. We'll take a closer look at the `Hotel` class later on.
-
-> [!NOTE]
-> You can always create the list of `Field` objects directly instead of using `FieldBuilder` if needed. For example, you may not want to use a model class or you may need to use an existing model class that you don't want to modify by adding attributes.
->
->
-
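-For example, here is a rough sketch of hand-building part of the "hotels" schema without `FieldBuilder`, reusing the `indexName` and `serviceClient` from the sample (only a few of the sample's fields are shown):
-
-```csharp
-var definition = new Index()
-{
-    Name = indexName,
-    Fields = new[]
-    {
-        // The key field must be a string; the attributes mirror those on the Hotel class.
-        new Field("HotelId", DataType.String) { IsKey = true, IsFilterable = true },
-        new Field("HotelName", DataType.String) { IsSearchable = true, IsSortable = true },
-        new Field("Rating", DataType.Double) { IsFilterable = true, IsSortable = true, IsFacetable = true }
-    }
-};
-
-serviceClient.Indexes.Create(definition);
-```
-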
-In addition to fields, you can also add scoring profiles, suggesters, or CORS options to the Index (these parameters are omitted from the sample for brevity). You can find more information about the Index object and its constituent parts in the [SDK reference](/dotnet/api/microsoft.azure.search.models.index), as well as in the [Azure Cognitive Search REST API reference](/rest/api/searchservice/).
-
-### Populating the index
-The next step in `Main` populates the newly-created index. This index population is done in the following method:
-(Some code replaced with "..." for illustration purposes. See the full sample solution for the full data population code.)
-
-```csharp
-private static void UploadDocuments(ISearchIndexClient indexClient)
-{
- var hotels = new Hotel[]
- {
- new Hotel()
- {
- HotelId = "1",
- HotelName = "Secret Point Motel",
- ...
- Address = new Address()
- {
- StreetAddress = "677 5th Ave",
- ...
- },
- Rooms = new Room[]
- {
- new Room()
- {
- Description = "Budget Room, 1 Queen Bed (Cityside)",
- ...
- },
- new Room()
- {
- Description = "Budget Room, 1 King Bed (Mountain View)",
- ...
- },
- new Room()
- {
- Description = "Deluxe Room, 2 Double Beds (City View)",
- ...
- }
- }
- },
- new Hotel()
- {
- HotelId = "2",
- HotelName = "Twin Dome Motel",
- ...
- Address = new Address()
- {
- StreetAddress = "140 University Town Center Dr",
- ...
- },
- Rooms = new Room[]
- {
- new Room()
- {
- Description = "Suite, 2 Double Beds (Mountain View)",
- ...
- },
- new Room()
- {
- Description = "Standard Room, 1 Queen Bed (City View)",
- ...
- },
- new Room()
- {
- Description = "Budget Room, 1 King Bed (Waterfront View)",
- ...
- }
- }
- },
- new Hotel()
- {
- HotelId = "3",
- HotelName = "Triple Landscape Hotel",
- ...
- Address = new Address()
- {
- StreetAddress = "3393 Peachtree Rd",
- ...
- },
- Rooms = new Room[]
- {
- new Room()
- {
- Description = "Standard Room, 2 Queen Beds (Amenities)",
- ...
- },
- new Room ()
- {
- Description = "Standard Room, 2 Double Beds (Waterfront View)",
- ...
- },
- new Room()
- {
- Description = "Deluxe Room, 2 Double Beds (Cityside)",
- ...
- }
- }
- }
- };
-
- var batch = IndexBatch.Upload(hotels);
-
- try
- {
- indexClient.Documents.Index(batch);
- }
- catch (IndexBatchException e)
- {
- // Sometimes when your Search service is under load, indexing will fail for some of the documents in
- // the batch. Depending on your application, you can take compensating actions like delaying and
- // retrying. For this simple demo, we just log the failed document keys and continue.
- Console.WriteLine(
- "Failed to index some of the documents: {0}",
- String.Join(", ", e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key)));
- }
-
- Console.WriteLine("Waiting for documents to be indexed...\n");
- Thread.Sleep(2000);
-}
-```
-
-This method has four parts. The first creates an array of three `Hotel` objects, each with three `Room` objects, that will serve as our input data to upload to the index. This data is hard-coded for simplicity. In your own application, your data will likely come from an external data source such as a SQL database.
-
-The second part creates an `IndexBatch` containing the documents. You specify the operation you want to apply to the batch at the time you create it, in this case by calling `IndexBatch.Upload`. The batch is then uploaded to the Azure Cognitive Search index by the `Documents.Index` method.
-
-> [!NOTE]
-> In this example, we are just uploading documents. If you wanted to merge changes into existing documents or delete documents, you could create batches by calling `IndexBatch.Merge`, `IndexBatch.MergeOrUpload`, or `IndexBatch.Delete` instead. You can also mix different operations in a single batch by calling `IndexBatch.New`, which takes a collection of `IndexAction` objects, each of which tells Azure Cognitive Search to perform a particular operation on a document. You can create each `IndexAction` with its own operation by calling the corresponding method such as `IndexAction.Merge`, `IndexAction.Upload`, and so on.
->
->
-
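-For illustration, here is a rough sketch of a mixed batch; the document values are hypothetical, not part of the sample data:
-
-```csharp
-var mixedBatch = IndexBatch.New(new[]
-{
-    // Upload a brand-new document.
-    IndexAction.Upload(new Hotel() { HotelId = "4", HotelName = "Example Inn" }),
-    // Merge changes into document 1, or upload it if it doesn't exist yet.
-    IndexAction.MergeOrUpload(new Hotel() { HotelId = "1", HotelName = "Secret Point Motel" }),
-    // Delete document 2; only the key needs to be set.
-    IndexAction.Delete(new Hotel() { HotelId = "2" })
-});
-
-indexClient.Documents.Index(mixedBatch);
-```
-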
-The third part of this method is a catch block that handles an important error case for indexing. If your Azure Cognitive Search service fails to index some of the documents in the batch, an `IndexBatchException` is thrown by `Documents.Index`. This exception can happen if you are indexing documents while your service is under heavy load. **We strongly recommend explicitly handling this case in your code.** You can delay and then retry indexing the documents that failed, or you can log and continue like the sample does, or you can do something else depending on your application's data consistency requirements.
-
-> [!NOTE]
-> You can use the [`FindFailedActionsToRetry`](/dotnet/api/microsoft.azure.search.indexbatchexception.findfailedactionstoretry) method to construct a new batch containing only the actions that failed in a previous call to `Index`. There is a discussion of how to properly use it [on StackOverflow](https://stackoverflow.com/questions/40012885/azure-search-net-sdk-how-to-use-findfailedactionstoretry).
->
->
-
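-As a rough sketch, a single retry using this method could look like the following (assuming the `batch` and `indexClient` from the sample):
-
-```csharp
-try
-{
-    indexClient.Documents.Index(batch);
-}
-catch (IndexBatchException e)
-{
-    // Build a new batch containing only the actions that failed, keyed by HotelId,
-    // then retry once after a short delay. Production code would add backoff and a retry limit.
-    IndexBatch<Hotel> retryBatch = e.FindFailedActionsToRetry(batch, hotel => hotel.HotelId);
-    Thread.Sleep(2000);
-    indexClient.Documents.Index(retryBatch);
-}
-```
-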
-Finally, the `UploadDocuments` method delays for two seconds. Indexing happens asynchronously in your Azure Cognitive Search service, so the sample application needs to wait a short time to ensure that the documents are available for searching. Delays like this are typically only necessary in demos, tests, and sample applications.
-
-<a name="how-dotnet-handles-documents"></a>
-
-#### How the .NET SDK handles documents
-You may be wondering how the Azure Cognitive Search .NET SDK is able to upload instances of a user-defined class like `Hotel` to the index. To help answer that question, let's look at the `Hotel` class:
-
-```csharp
-using System;
-using Microsoft.Azure.Search;
-using Microsoft.Azure.Search.Models;
-using Microsoft.Spatial;
-using Newtonsoft.Json;
-
-public partial class Hotel
-{
- [System.ComponentModel.DataAnnotations.Key]
- [IsFilterable]
- public string HotelId { get; set; }
-
- [IsSearchable, IsSortable]
- public string HotelName { get; set; }
-
- [IsSearchable]
- [Analyzer(AnalyzerName.AsString.EnLucene)]
- public string Description { get; set; }
-
- [IsSearchable]
- [Analyzer(AnalyzerName.AsString.FrLucene)]
- [JsonProperty("Description_fr")]
- public string DescriptionFr { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string Category { get; set; }
-
- [IsSearchable, IsFilterable, IsFacetable]
- public string[] Tags { get; set; }
-
- [IsFilterable, IsSortable, IsFacetable]
- public bool? ParkingIncluded { get; set; }
-
- // SmokingAllowed reflects whether any room in the hotel allows smoking.
- // The JsonIgnore attribute indicates that a field should not be created
- // in the index for this property and it will only be used by code in the client.
- [JsonIgnore]
- public bool? SmokingAllowed => (Rooms != null) ? Array.Exists(Rooms, element => element.SmokingAllowed == true) : (bool?)null;
-
- [IsFilterable, IsSortable, IsFacetable]
- public DateTimeOffset? LastRenovationDate { get; set; }
-
- [IsFilterable, IsSortable, IsFacetable]
- public double? Rating { get; set; }
-
- public Address Address { get; set; }
-
- [IsFilterable, IsSortable]
- public GeographyPoint Location { get; set; }
-
- public Room[] Rooms { get; set; }
-}
-```
-
-The first thing to notice is that the name of each public property in the `Hotel` class will map to a field with the same name in the index definition. If you would like each field to start with a lower-case letter ("camel case"), you can tell the SDK to map the property names to camel case automatically with the `[SerializePropertyNamesAsCamelCase]` attribute on the class. This is useful in .NET applications that perform data binding against a target schema outside the control of the application developer, because it lets you use camel-case field names without violating the "Pascal case" naming guidelines in .NET.
-
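-As a minimal sketch, opting into camel-case field names looks like this:
-
-```csharp
-// With this attribute, the HotelName property maps to a "hotelName" field
-// in the index instead of "HotelName".
-[SerializePropertyNamesAsCamelCase]
-public partial class Hotel
-{
-    public string HotelName { get; set; }
-}
-```
-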
-> [!NOTE]
-> The Azure Cognitive Search .NET SDK uses the [NewtonSoft JSON.NET](https://www.newtonsoft.com/json/help/html/Introduction.htm) library to serialize and deserialize your custom model objects to and from JSON. You can customize this serialization if needed. For more information, see [Custom Serialization with JSON.NET](#JsonDotNet).
->
->
-
-The second thing to notice is that each property is decorated with attributes such as `IsFilterable`, `IsSearchable`, `Key`, and `Analyzer`. These attributes map directly to the [corresponding field attributes in an Azure Cognitive Search index](/rest/api/searchservice/create-index). The `FieldBuilder` class uses these attributes to construct field definitions for the index.
-
-The third important thing about the `Hotel` class is the data types of the public properties. The .NET types of these properties map to their equivalent field types in the index definition. For example, the `Category` string property maps to the `Category` field, which is of type `Edm.String`. There are similar type mappings between `bool?` and `Edm.Boolean`, `DateTimeOffset?` and `Edm.DateTimeOffset`, and so on. The specific rules for the type mapping are documented with the `Documents.Get` method in the [Azure Cognitive Search .NET SDK reference](/dotnet/api/microsoft.azure.search.documentsoperationsextensions.get). The `FieldBuilder` class takes care of this mapping for you, but it can still be helpful to understand in case you need to troubleshoot any serialization issues.
-
-Did you happen to notice the `SmokingAllowed` property?
-
-```csharp
-[JsonIgnore]
-public bool? SmokingAllowed => (Rooms != null) ? Array.Exists(Rooms, element => element.SmokingAllowed == true) : (bool?)null;
-```
-
-The `JsonIgnore` attribute on this property tells the `FieldBuilder` to not serialize it to the index as a field. This is a great way to create client-side calculated properties you can use as helpers in your application. In this case, the `SmokingAllowed` property reflects whether any `Room` in the `Rooms` collection allows smoking. If all are false, it indicates that the entire hotel does not allow smoking.
-
-Some properties such as `Address` and `Rooms` are instances of .NET classes. These properties represent more complex data structures and, as a result, require fields with a [complex data type](./search-howto-complex-data-types.md) in the index.
-
-The `Address` property represents a set of multiple values in the `Address` class, defined below:
-
-```csharp
-using System;
-using Microsoft.Azure.Search;
-using Microsoft.Azure.Search.Models;
-using Newtonsoft.Json;
-
-namespace AzureSearch.SDKHowTo
-{
- public partial class Address
- {
- [IsSearchable]
- public string StreetAddress { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string City { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string StateProvince { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string PostalCode { get; set; }
-
- [IsSearchable, IsFilterable, IsSortable, IsFacetable]
- public string Country { get; set; }
- }
-}
-```
-
-This class contains the standard values used to describe addresses in the United States or Canada. You can use types like this to group logical fields together in the index.
-
-The `Rooms` property represents an array of `Room` objects:
-
-```csharp
-using System;
-using Microsoft.Azure.Search;
-using Microsoft.Azure.Search.Models;
-using Newtonsoft.Json;
-
-namespace AzureSearch.SDKHowTo
-{
- public partial class Room
- {
- [IsSearchable]
- [Analyzer(AnalyzerName.AsString.EnMicrosoft)]
- public string Description { get; set; }
-
- [IsSearchable]
- [Analyzer(AnalyzerName.AsString.FrMicrosoft)]
- [JsonProperty("Description_fr")]
- public string DescriptionFr { get; set; }
-
- [IsSearchable, IsFilterable, IsFacetable]
- public string Type { get; set; }
-
- [IsFilterable, IsFacetable]
- public double? BaseRate { get; set; }
-
- [IsSearchable, IsFilterable, IsFacetable]
- public string BedOptions { get; set; }
-
- [IsFilterable, IsFacetable]
- public int SleepsCount { get; set; }
-
- [IsFilterable, IsFacetable]
- public bool? SmokingAllowed { get; set; }
-
- [IsSearchable, IsFilterable, IsFacetable]
- public string[] Tags { get; set; }
- }
-}
-```
-
-Your data model in .NET and its corresponding index schema should be designed to support the search experience you'd like to give to your end user. Each top-level object in .NET (that is, each document in the index) corresponds to a search result you would present in your user interface. For example, in a hotel search application your end users may want to search by hotel name, features of the hotel, or the characteristics of a particular room. We'll cover some query examples a little later.
-
-This ability to use your own classes to interact with documents in the index works in both directions; you can also retrieve search results and have the SDK automatically deserialize them to a type of your choice, as we will see in the next section.
-
-> [!NOTE]
-> The Azure Cognitive Search .NET SDK also supports dynamically-typed documents using the `Document` class, which is a key/value mapping of field names to field values. This is useful in scenarios where you don't know the index schema at design-time, or where it would be inconvenient to bind to specific model classes. All the methods in the SDK that deal with documents have overloads that work with the `Document` class, as well as strongly-typed overloads that take a generic type parameter. Only the latter are used in the sample code in this tutorial. The [`Document` class](/dotnet/api/microsoft.azure.search.models.document) inherits from `Dictionary<string, object>`.
->
->
-
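-As a short sketch, the dynamically-typed overload of `Search` looks like this (assuming the "hotels" index from this sample):
-
-```csharp
-DocumentSearchResult<Document> dynamicResults =
-    indexClient.Documents.Search("motel", new SearchParameters());
-
-foreach (SearchResult<Document> result in dynamicResults.Results)
-{
-    // Document inherits from Dictionary<string, object>, so fields are accessed by name.
-    Console.WriteLine(result.Document["HotelName"]);
-}
-```
-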
-**Why you should use nullable data types**
-
-When designing your own model classes to map to an Azure Cognitive Search index, we recommend declaring properties of value types such as `bool` and `int` to be nullable (for example, `bool?` instead of `bool`). If you use a non-nullable property, you have to **guarantee** that no documents in your index contain a null value for the corresponding field. Neither the SDK nor the Azure Cognitive Search service will help you to enforce this.
-
-This is not just a hypothetical concern: Imagine a scenario where you add a new field to an existing index that is of type `Edm.Int32`. After updating the index definition, all documents will have a null value for that new field (since all types are nullable in Azure Cognitive Search). If you then use a model class with a non-nullable `int` property for that field, you will get a `JsonSerializationException` like this when trying to retrieve documents:
-
-```output
-Error converting value {null} to type 'System.Int32'. Path 'IntValue'.
-```
-
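-A nullable declaration avoids this error; the `IntValue` name here follows the error message above:
-
-```csharp
-// Safe even when existing documents have no value for this field.
-[IsFilterable, IsSortable]
-public int? IntValue { get; set; }
-```
-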
-For this reason, we recommend that you use nullable types in your model classes as a best practice.
-
-<a name="JsonDotNet"></a>
-
-#### Custom Serialization with JSON.NET
-The SDK uses JSON.NET for serializing and deserializing documents. You can customize serialization and deserialization if needed by defining your own `JsonConverter` or `IContractResolver`. This can be useful when you want to adapt an existing model class from your application for use with Azure Cognitive Search, and for other more advanced scenarios. For more information, see the [JSON.NET documentation](https://www.newtonsoft.com/json/help/html/Introduction.htm). For example, with custom serialization you can:
-
-* Include or exclude certain properties of your model class from being stored as document fields.
-* Map between property names in your code and field names in your index.
-* Create custom attributes that can be used for mapping properties to document fields.
-
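-For instance, here is a rough sketch of a converter that stores a `bool` property as a "yes"/"no" string field in the index. The `BoolToYesNoConverter` name and mapping are hypothetical, not part of the SDK:
-
-```csharp
-// Requires: using System; using Newtonsoft.Json;
-public class BoolToYesNoConverter : JsonConverter
-{
-    public override bool CanConvert(Type objectType) => objectType == typeof(bool);
-
-    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer) =>
-        string.Equals((string)reader.Value, "yes", StringComparison.OrdinalIgnoreCase);
-
-    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) =>
-        writer.WriteValue((bool)value ? "yes" : "no");
-}
-
-// Register the converter on the client's JSON.NET settings so it applies
-// to both indexing and query deserialization.
-indexClient.SerializationSettings.Converters.Add(new BoolToYesNoConverter());
-indexClient.DeserializationSettings.Converters.Add(new BoolToYesNoConverter());
-```
-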
-You can find examples of implementing custom serialization in the unit tests for the Azure Cognitive Search .NET SDK on GitHub. A good starting point is [this folder](https://github.com/Azure/azure-sdk-for-net/tree/4f6f4e4c90200c1b0621c4cead302a91e89f2aba/sdk/search/Microsoft.Azure.Search/tests/Tests/Models). It contains classes that are used by the custom serialization tests.
-
-### Searching for documents in the index
-The last step in the sample application is to search for some documents in the index:
-
-```csharp
-private static void RunQueries(ISearchIndexClient indexClient)
-{
- SearchParameters parameters;
- DocumentSearchResult<Hotel> results;
-
- Console.WriteLine("Search the entire index for the term 'motel' and return only the HotelName field:\n");
-
- parameters =
- new SearchParameters()
- {
- Select = new[] { "HotelName" }
- };
-
- results = indexClient.Documents.Search<Hotel>("motel", parameters);
-
- WriteDocuments(results);
-
- Console.Write("Apply a filter to the index to find hotels with a room cheaper than $100 per night, ");
- Console.WriteLine("and return the hotelId and description:\n");
-
- parameters =
- new SearchParameters()
- {
- Filter = "Rooms/any(r: r/BaseRate lt 100)",
- Select = new[] { "HotelId", "Description" }
- };
-
- results = indexClient.Documents.Search<Hotel>("*", parameters);
-
- WriteDocuments(results);
-
- Console.Write("Search the entire index, order by a specific field (lastRenovationDate) ");
- Console.Write("in descending order, take the top two results, and show only hotelName and ");
- Console.WriteLine("lastRenovationDate:\n");
-
- parameters =
- new SearchParameters()
- {
- OrderBy = new[] { "LastRenovationDate desc" },
- Select = new[] { "HotelName", "LastRenovationDate" },
- Top = 2
- };
-
- results = indexClient.Documents.Search<Hotel>("*", parameters);
-
- WriteDocuments(results);
-
-    Console.WriteLine("Search the hotel names for the term 'hotel':\n");
-
-    parameters =
-        new SearchParameters()
-        {
-            SearchFields = new[] { "HotelName" }
-        };
-
-    results = indexClient.Documents.Search<Hotel>("hotel", parameters);
-
- WriteDocuments(results);
-}
-```
-
-Each time it executes a query, this method first creates a new `SearchParameters` object. This object is used to specify additional options for the query such as sorting, filtering, paging, and faceting. In this method, we're setting the `Filter`, `Select`, `OrderBy`, `Top`, and `SearchFields` properties for different queries. All the `SearchParameters` properties are documented [here](/dotnet/api/microsoft.azure.search.models.searchparameters).
-
-The next step is to actually execute the search query. Running the search is done using the `Documents.Search` method. For each query, we pass the search text to use as a string (or `"*"` if there is no search text), plus the search parameters created earlier. We also specify `Hotel` as the type parameter for `Documents.Search`, which tells the SDK to deserialize documents in the search results into objects of type `Hotel`.
-
-> [!NOTE]
-> You can find more information about the search query expression syntax [here](/rest/api/searchservice/Simple-query-syntax-in-Azure-Search).
->
->
-
-Finally, after each query this method iterates through all the matches in the search results, printing each document to the console:
-
-```csharp
-private static void WriteDocuments(DocumentSearchResult<Hotel> searchResults)
-{
- foreach (SearchResult<Hotel> result in searchResults.Results)
- {
- Console.WriteLine(result.Document);
- }
-
- Console.WriteLine();
-}
-```
-
-Let's take a closer look at each of the queries in turn. Here is the code to execute the first query:
-
-```csharp
-parameters =
- new SearchParameters()
- {
- Select = new[] { "HotelName" }
- };
-
-results = indexClient.Documents.Search<Hotel>("motel", parameters);
-
-WriteDocuments(results);
-```
-
-In this case, we're searching the entire index for the word "motel" in any searchable field and we only want to retrieve the hotel names, as specified by the `Select` parameter. Here are the results:
-
-```output
-Name: Secret Point Motel
-
-Name: Twin Dome Motel
-```
-
-The next query is a little more interesting. We want to find any hotels that have a room with a nightly rate of less than $100 and return only the hotel ID and description:
-
-```csharp
-parameters =
- new SearchParameters()
- {
- Filter = "Rooms/any(r: r/BaseRate lt 100)",
- Select = new[] { "HotelId", "Description" }
- };
-
-results = indexClient.Documents.Search<Hotel>("*", parameters);
-
-WriteDocuments(results);
-```
-
-This query uses an OData `$filter` expression, `Rooms/any(r: r/BaseRate lt 100)`, to filter the documents in the index. This uses the [any operator](./search-query-odata-collection-operators.md) to apply the filter `BaseRate lt 100` to every item in the `Rooms` collection. You can find out more about the OData syntax that Azure Cognitive Search supports [here](./query-odata-filter-orderby-syntax.md).
-
-Here are the results of the query:
-
-```output
-HotelId: 1
-Description: The hotel is ideally located on the main commercial artery of the city in the heart of New York...
-
-HotelId: 2
-Description: The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to...
-```
-
-Next, we want to find the top two hotels that have been most recently renovated, and show the hotel name and last renovation date. Here is the code:
-
-```csharp
-parameters =
- new SearchParameters()
- {
- OrderBy = new[] { "LastRenovationDate desc" },
- Select = new[] { "HotelName", "LastRenovationDate" },
- Top = 2
- };
-
-results = indexClient.Documents.Search<Hotel>("*", parameters);
-
-WriteDocuments(results);
-```
-
-In this case, we again use OData syntax to specify the `OrderBy` parameter as `LastRenovationDate desc`. We also set `Top` to 2 to ensure we only get the top two documents. As before, we set `Select` to specify which fields should be returned.
-
-Here are the results:
-
-```output
-Name: Triple Landscape Hotel Last renovated on: 9/20/2015 12:00:00 AM +00:00
-Name: Twin Dome Motel Last renovated on: 2/18/1979 12:00:00 AM +00:00
-```
-
-Finally, we want to find all hotel names that match the word "hotel":
-
-```csharp
-parameters = new SearchParameters()
-{
- SearchFields = new[] { "HotelName" }
-};
-results = indexClient.Documents.Search<Hotel>("hotel", parameters);
-
-WriteDocuments(results);
-```
-
-And here are the results, which include all fields since we did not specify the `Select` property:
-
-```output
- HotelId: 3
- Name: Triple Landscape Hotel
- ...
-```
-
-This step completes the tutorial, but don't stop here. The next steps below provide additional resources for learning more about Azure Cognitive Search.
-
-## Next steps
-* Browse the references for the [.NET SDK](/dotnet/api/microsoft.azure.search) and [REST API](/rest/api/searchservice/).
-* Review [naming conventions](/rest/api/searchservice/Naming-rules) to learn the rules for naming various objects.
-* Review [supported data types](/rest/api/searchservice/Supported-data-types) in Azure Cognitive Search.
security Physical Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/physical-security.md
Microsoft requires visitors to surrender badges upon departure from any Microsof
Periodically, we conduct physical security reviews of the facilities, to ensure the datacenters properly address Azure security requirements. The datacenter hosting provider personnel do not provide Azure service management. Personnel can't sign in to Azure systems and don't have physical access to the Azure collocation room and cages. ## Data bearing devices
-Microsoft uses best practice procedures and a wiping solution that is [NIST 800-88 compliant](https://csrc.nist.gov/publications/detail/sp/800-88/archive/2006-09-01). For hard drives that can't be wiped, we use a destruction process that destroys it and renders the recovery of information impossible. This destruction process can be to disintegrate, shred, pulverize, or incinerate. We determine the means of disposal according to the asset type. We retain records of the destruction.
+Microsoft uses best practice procedures and a wiping solution that is [NIST 800-88 compliant](https://csrc.nist.gov/publications/detail/sp/800-88/rev-1/final). For hard drives that can't be wiped, we use a destruction process that destroys it and renders the recovery of information impossible. This destruction process can be to disintegrate, shred, pulverize, or incinerate. We determine the means of disposal according to the asset type. We retain records of the destruction.
## Equipment disposal Upon a system's end-of-life, Microsoft operational personnel follow rigorous data handling and hardware disposal procedures to assure that hardware containing your data is not made available to untrusted parties. We use a secure erase approach for hard drives that support it. For hard drives that can't be wiped, we use a destruction process that destroys the drive and renders the recovery of information impossible. This destruction process can be to disintegrate, shred, pulverize, or incinerate. We determine the means of disposal according to the asset type. We retain records of the destruction. All Azure services use approved media storage and disposal management services.
sentinel Threat Intelligence Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md
To connect to TAXII threat intelligence feeds, follow the instructions to [conne
- [Learn more about Sectrio integration](https://sectrio.com/threat-intelligence/) - [Step by step process for integrating Sectrio's TI feed into Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-bring-threat-intelligence-from-sectrio-using/ba-p/2964648)
+### SEKOIA.IO
+
+- [Learn about SEKOIA.IO integration with Microsoft Sentinel](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bring-threat-intelligence-from-sekoia-io-using-taxii-data/ba-p/3302497)
+ ## Integrated threat intelligence platform products To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions to [connect Threat Intelligence platforms to Microsoft Sentinel](connect-threat-intelligence-tip.md). The second part of these instructions calls for you to enter information into your TIP solution. See the links below for more information.
service-bus-messaging Transport Layer Security Audit Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/transport-layer-security-audit-minimum-version.md
To create a policy with an audit effect for the minimum TLS version with the Azu
}, { "not": {
- "field": " Microsoft.ServiceBus/namespaces/minimumTlsVersion",
+ "field": "Microsoft.ServiceBus/namespaces/minimumTlsVersion",
"equals": "1.2" } }
To create a policy with a deny effect for a minimum TLS version that is less tha
"allOf": [ { "field": "type",
- "equals": " Microsoft.ServiceBus/namespaces"
+ "equals": "Microsoft.ServiceBus/namespaces"
}, { "not": {
- "field": " Microsoft.ServiceBus/namespaces/minimumTlsVersion",
+ "field": "Microsoft.ServiceBus/namespaces/minimumTlsVersion",
"equals": "1.2" } }
See the following documentation for more information.
- [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a Service Bus namespace](transport-layer-security-enforce-minimum-version.md) - [Configure the minimum TLS version for a Service Bus namespace](transport-layer-security-configure-minimum-version.md)-- [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)
+- [Configure Transport Layer Security (TLS) for a Service Bus client application](transport-layer-security-configure-client-version.md)
site-recovery Azure To Azure Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-network-connectivity.md
Try to access the DNS server from the virtual machine. If the DNS server isn't a
### Issue 2: Site Recovery configuration failed (151196) > [!NOTE]
-> If the VMs are behind a **Standard** internal load balancer, by default, it wouldn't have access to the Microsoft 365 IPs such as `login.microsoftonline.com`. For outbound access create an Azure NAT gateway. For more information see [Tutorial: Create a NAT gateway - Azure CLI](../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md).
+> If the VMs are behind a **Standard** internal load balancer, by default, they won't have access to the Microsoft 365 IPs such as `login.microsoftonline.com`. For outbound access, create an Azure NAT gateway. For more information, see [Quickstart: Create a NAT gateway - Azure CLI](../virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md).
#### Possible cause
To allow [the required URLs](azure-to-azure-about-networking.md#outbound-connect
## Next steps
-[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md)
+[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md)
spring-cloud How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-github-actions.md
jobs:
runs-on: ubuntu-latest name: deploy to production with artifact steps:
- - name: Checkout Github Action
+ - name: Checkout GitHub Action
uses: actions/checkout@v2 - name: Set up JDK 1.8
jobs:
runs-on: ubuntu-latest name: deploy to production with source code steps:
- - name: Checkout Github Action
+ - name: Checkout GitHub Action
uses: actions/checkout@v2 - name: Login via Azure CLI
spring-cloud How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-migrate-standard-tier-to-enterprise-tier.md
Use the following steps to create and configure an application using Spring Clou
1. Assign a public endpoint to the gateway to access it.
- # [Azure [portal](#tab/azure-portal)
+ # [Azure portal](#tab/azure-portal)
1. Select the **Spring Cloud Gateway** section, then select **Overview** to view the running state and resources given to Spring Cloud Gateway and its operator.
spring-cloud How To Service Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-service-registration.md
Title: Discover and register your Spring Boot applications in Azure Spring Cloud description: Discover and register your Spring Boot applications with managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud + Previously updated : 09/08/2020- Last updated : 05/09/2022 zone_pivot_groups: programming-languages-spring-cloud
zone_pivot_groups: programming-languages-spring-cloud
**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
-Service registration and discovery are key requirements for maintaining a list of live app instances to call, and routing and load balancing inbound requests. Configuring each client manually takes time and introduces the possibility of human error. Managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud solves this problem. Once configured, a Service Registry server will control service registration and discovery for your applications. The Service Registry server maintains a registry of live app instances, enables client-side load-balancing, and decouples service providers from clients without relying on DNS.
+This article shows you how to register your application using Spring Cloud Service Registry.
+
+Service registration and discovery are key requirements for maintaining a list of live app instances to call, and routing and load balancing inbound requests. Configuring each client manually takes time and introduces the possibility of human error. Azure Spring Cloud provides two options for you to solve this problem:
+
+* Use Kubernetes Service Discovery approach to invoke calls among your apps.
+
+ Azure Spring Cloud creates a corresponding Kubernetes service for every app running in it, using the app name as the Kubernetes service name. You can invoke calls from one app to another by using the app name in an HTTP or HTTPS request, such as http(s)://{app name}/path. This approach is also suitable for the Enterprise tier.
+
+* Use Managed Spring Cloud Service Registry (OSS) in Azure Spring Cloud.
+
+ After configuration, a Service Registry server will control service registration and discovery for your applications. The Service Registry server maintains a registry of live app instances, enables client-side load-balancing, and decouples service providers from clients without relying on DNS.
::: zone pivot="programming-language-csharp"
+
For information about how to set up service registration for a Steeltoe app, see [Prepare a Java Spring application for deployment in Azure Spring Cloud](how-to-prepare-app-deployment.md).
+
::: zone-end
+
::: zone pivot="programming-language-java"

## Register your application using Spring Cloud Service Registry
-Before your application can manage service registration and discovery using Spring Cloud Service Registry, you must include the dependency for *spring-cloud-starter-netflix-eureka-client* to your *pom.xml*
+Before your application can manage service registration and discovery using Spring Cloud Service Registry, you must add the following *spring-cloud-starter-netflix-eureka-client* dependency to your *pom.xml*:
```xml
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
- </dependency>
+<dependency>
+ <groupId>org.springframework.cloud</groupId>
+ <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
+</dependency>
```

## Update the top level class
-Finally, we add an annotation to the top level class of your application
+Finally, add an annotation to the top level class of your application as shown in the following example:
- ```java
- package foo.bar;
+```java
+package foo.bar;
- import org.springframework.boot.SpringApplication;
- import org.springframework.boot.autoconfigure.SpringBootApplication;
- import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
+import org.springframework.boot.SpringApplication;
+import org.springframework.boot.autoconfigure.SpringBootApplication;
+import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
- @SpringBootApplication
- @EnableEurekaClient
- public class DemoApplication {
+@SpringBootApplication
+@EnableEurekaClient
+public class DemoApplication {
- public static void main(String[] args) {
- SpringApplication.run(DemoApplication.class, args);
- }
+ public static void main(String[] args) {
+ SpringApplication.run(DemoApplication.class, args);
}
- ```
+}
+```
```

The Spring Cloud Service Registry server endpoint will be injected as an environment variable in your application. Applications will now be able to register themselves with the Service Registry server and discover other dependent applications.
-Note that it can take a few minutes for the changes to propagate from the server to all applications.
+> [!NOTE]
+> It can take a few minutes for the changes to propagate from the server to all applications.
::: zone-end
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The items that appear in these tables will change over time as support continues
| [Custom domains](storage-custom-domain-name.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Encryption scopes](encryption-scope-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Immutable storage](immutable-storage-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
The items that appear in these tables will change over time as support continues
| [Custom domains](storage-custom-domain-name.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
| [Encryption scopes](encryption-scope-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
| [Immutable storage](immutable-storage-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
When planning for disaster recovery during a regional outage, you should create
### Enabling access to virtual networks in other regions (preview)
-To enable access from a virtual network that is located in another region, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. Subnets in other regions which have storage service endpoints will no longer use a public IP address to communicate with the storage account. All traffic will originate from a private IP address and any IP network rules that permit traffic from those subnets will no longer have an effect.
+To enable access from a virtual network that is located in another region, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All the subnets in the subscription that has the _AllowedGlobalTagsForStorage_ feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
> [!IMPORTANT] > This capability is currently in PREVIEW.
storage Storage Use Azcopy Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-files.md
This section contains the following examples:
> - Download specific files > [!TIP]
-> You can tweak your download operation by using optional flags. Here's a few examples.
+> You can tweak your download operation by using optional flags. Here are a few examples:
> > |Scenario|Flag| > |||
See these articles to configure settings, optimize performance, and troubleshoot
- [AzCopy configuration settings](storage-ref-azcopy-configuration-settings.md) - [Optimize the performance of AzCopy](storage-use-azcopy-optimize.md)-- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
+- [Troubleshoot AzCopy V10 issues in Azure Storage by using log files](storage-use-azcopy-configure.md)
storage File Sync Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-resource-move.md
Title: Azure File Sync resource moves and topology changes description: Learn how to move sync resources across resource groups, subscriptions, and Azure Active Directory (AAD) tenants.-+ Last updated 04/13/2021-+
When planning to make changes to the Azure File Sync cloud resources, it's impor
* :::image type="icon" source="media/storage-sync-resource-move/storage-sync-resource-move-cloud-endpoint.png" border="false"::: Cloud endpoint * :::image type="icon" source="media/storage-sync-resource-move/storage-sync-resource-move-server-endpoint.png" border="false"::: Server endpoint
-In Azure File Sync, the only resource capable of moving is the Storage Sync Service resource. Any subresources are bound to its parent and cannot move to another Storage Sync Service.
+In Azure File Sync, the only resource capable of moving is the Storage Sync Service resource. Any subresources are bound to its parent and can't move to another Storage Sync Service.
**Azure storage resources (in hierarchical order)** * :::image type="icon" source="media/storage-sync-resource-move/storage-sync-resource-move-storage-account.png" border="false"::: Storage account * :::image type="icon" source="media/storage-sync-resource-move/storage-sync-resource-move-file-share.png" border="false"::: File share
-In Azure Storage, the only resource capable of moving is the storage account. An Azure file share, as a subresource, cannot move to a different storage account.
+In Azure Storage, the only resource capable of moving is the storage account. An Azure file share, as a subresource, can't move to a different storage account.
## Supported combinations
When planning your resource move, there are different considerations for [moving
### Move to a new Azure Active Directory tenant
-Individual resources like a Storage Sync Service or storage accounts, cannot move by themselves to a different AAD tenant. Only Azure subscriptions can move AAD tenants. Think about your subscription structure in the new AAD tenant. You can use a dedicated subscription for Azure File Sync.
+Individual resources like a Storage Sync Service or storage accounts, can't move by themselves to a different AAD tenant. Only Azure subscriptions can move AAD tenants. Think about your subscription structure in the new AAD tenant. You can use a dedicated subscription for Azure File Sync.
1. Create an Azure subscription (or determine an existing one in the old tenant) that should move. 1. [Perform a subscription move within the same AAD tenant](#move-within-the-same-azure-active-directory-tenant) of your Storage Sync Service and all associated storage accounts. 1. Sync will stop. Complete your tenant move immediately or [restore sync's ability to access the storage accounts that moved](#azure-file-sync-storage-access-authorization). You can then move to the new AAD tenant later.
-Once all related Azure File Sync resources have been sequestered into their own subscription, you are ready to move the entire subscription to the target AAD tenant. The [transfer subscription guide](../../role-based-access-control/transfer-subscription.md) allows you to plan and execute such a transfer.
+Once all related Azure File Sync resources have been sequestered into their own subscription, you're ready to move the entire subscription to the target AAD tenant. The [transfer subscription guide](../../role-based-access-control/transfer-subscription.md) allows you to plan and execute such a transfer.
> [!WARNING] > When you transfer a subscription from one tenant to another, sync will stop immediately. You have to manually authorize sync to access the relevant storage accounts in the new subscription. The [Azure File Sync storage access authorization](#azure-file-sync-storage-access-authorization) section will provide the necessary steps.
Once all related Azure File Sync resources have been sequestered into their own
## Azure File Sync storage access authorization
-When storage accounts are moved to either a new subscription or are moved within a subscription to a new Azure Active Directory (AAD) tenant, sync will stop. Role-based access (RBAC) is used to authorize Azure File Sync to access a storage account and these role assignments are not migrated with the resources.
+When storage accounts are moved to either a new subscription or are moved within a subscription to a new Azure Active Directory (AAD) tenant, sync will stop. Role-based access control (RBAC) is used to authorize Azure File Sync to access a storage account, and these role assignments are not migrated with the resources.
### Azure File Sync service principal
This assignment is typically done automatically through the user context of the
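If that automatic assignment is lost after a move, re-granting access can be sketched with the Azure CLI as below. The service principal display name and the built-in role are the ones the Azure File Sync documentation references; verify both for your environment before use, and treat the scope as a placeholder.

```azurecli-interactive
# Sketch: re-grant the Azure File Sync first-party service principal access
# to a moved storage account. "Reader and Data Access" is the built-in role
# sync uses (assumption: verify against current Azure File Sync docs).
spId=$(az ad sp list --display-name "Microsoft.StorageSync" --query "[0].id" -o tsv)
az role assignment create \
  --assignee-object-id "$spId" \
  --assignee-principal-type ServicePrincipal \
  --role "Reader and Data Access" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account-name>"
```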
## Move to a different Azure region
-The Azure File Sync resource *Storage Sync Service* and the storage accounts that contain file shares that are syncing, have an Azure region they are deployed in. You determine that region when you create a resource. The region of the Storage Sync Service and storage account resources must match. These regions cannot be changed on either resource type after their creation.
+The Azure File Sync resource *Storage Sync Service* and the storage accounts that contain file shares that are syncing, have an Azure region they are deployed in. You determine that region when you create a resource. The region of the Storage Sync Service and storage account resources must match. These regions can't be changed on either resource type after their creation.
-Assigning a different region to a resource is different from a [region fail-over](#region-fail-over), which can be supported, depending on your storage account redundancy setting.
+Assigning a different region to a resource is different from a [region fail-over](#region-fail-over), which can be supported depending on your storage account redundancy setting.
## Region fail-over
-[Azure storage offers geo-redundancy options](../common/storage-redundancy.md#geo-redundant-storage) for a storage account. These redundancy options can pose problems for storage accounts used with Azure File Sync. The main reason is that replication between geographically distant regions is not performed by Azure File Sync, but by a storage replication technology built-in to the storage subsystem in Azure. It cannot have an understanding of application state and Azure File Sync is an application with files syncing to and from Azure file shares at any given moment. If you opt for any of these geographically disbursed storage redundancy options, you won't lose all of your data in a large-scale disaster. However, you need to [anticipate data loss](../common/storage-disaster-recovery-guidance.md#anticipate-data-loss).
+[Azure storage offers geo-redundancy options](../common/storage-redundancy.md#geo-redundant-storage) for a storage account. These redundancy options can pose problems for storage accounts used with Azure File Sync. The main reason is that replication between geographically distant regions is not performed by Azure File Sync, but by a storage replication technology built into the storage subsystem in Azure. It has no understanding of application state, and Azure File Sync is an application with files syncing to and from Azure file shares at any given moment. If you opt for any of these geographically dispersed storage redundancy options, you won't lose all of your data in a large-scale disaster. However, you need to [anticipate data loss](../common/storage-disaster-recovery-guidance.md#anticipate-data-loss).
> [!CAUTION]
-> Fail-over is never an appropriate substitute to provisioning your resources in the correct Azure region. If your resources are in the "wrong" region, you need to consider stopping sync and setting sync up again to new Azure file shares that are deployed in your desired region.
+> Failover is never an appropriate substitute to provisioning your resources in the correct Azure region. If your resources are in the "wrong" region, you need to consider stopping sync and setting sync up again to new Azure file shares that are deployed in your desired region.
-A regional fail-over can be started by Microsoft in a catastrophic event that will render data centers in an Azure region incapacitated for an extended period of time. The definition of downtime your business can sustain might be less than the time Microsoft is prepared to let pass before starting a regional fail-over. For a situation like that, [fail-overs can also be initiated by customers](../common/storage-initiate-account-failover.md).
+A regional failover can be started by Microsoft in a catastrophic event that will render data centers in an Azure region incapacitated for an extended period of time. The definition of downtime your business can sustain might be less than the time Microsoft is prepared to let pass before starting a regional failover. For a situation like that, [failovers can also be initiated by customers](../common/storage-initiate-account-failover.md).
> [!IMPORTANT]
-> In the event of a fail-over, you need to file a support ticket for your impacted Storage Sync Services for sync to work again.
+> In the event of a failover, you need to file a support ticket for your impacted Storage Sync Services for sync to work again.
## See also
storage File Sync Server Endpoint Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-server-endpoint-create.md
Title: Create an Azure File Sync server endpoint description: Understand the options during server endpoint creation and how to best apply them to your situation.-+ Last updated 06/01/2021-+
storage File Sync Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-troubleshoot.md
Title: Troubleshoot Azure File Sync | Microsoft Docs description: Troubleshoot common issues in a deployment on Azure File Sync, which you can use to transform Windows Server into a quick cache of your Azure file share.-+ Last updated 11/2/2021-+
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md
Title: Azure Files and Azure NetApp Files Comparison | Microsoft Docs description: Comparison of Azure Files and Azure NetApp Files.-+ Last updated 05/04/2022-+ # Azure Files and Azure NetApp Files comparison
storage Storage Troubleshoot Linux File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-linux-file-connection-problems.md
Title: Troubleshoot Azure Files problems in Linux | Microsoft Docs description: Troubleshooting Azure Files problems in Linux. See common issues related to Azure Files when you connect from Linux clients, and see possible resolutions.-+ Last updated 10/16/2018-+ # Troubleshoot Azure Files problems in Linux (SMB)
storage Storage Troubleshoot Windows File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-windows-file-connection-problems.md
Title: Troubleshoot Azure Files problems in Windows description: Troubleshooting Azure Files problems in Windows. See common issues related to Azure Files when you connect from Windows clients, and see possible resolutions. Only for SMB shares-+ Last updated 01/31/2022-+
storage Storage Troubleshooting Files Nfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-nfs.md
Title: Troubleshoot Azure NFS file share problems - Azure Files description: Troubleshoot Azure NFS file share problems.-+ Last updated 09/15/2020-+
Azure Files disallows alphanumeric UIDs/GIDs, so idmapping must be disabled.
Even if idmapping has been correctly disabled, the settings for disabling idmapping get overridden in some cases. For example, when Azure Files encounters a bad file name, it sends back an error. Upon seeing this particular error code, the NFS v4.1 Linux client decides to re-enable idmapping, and future requests are sent again with alphanumeric UID/GID. For a list of unsupported characters on Azure Files, see this [article](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata). The colon is one of the unsupported characters. ### Workaround
-Check that idmapping is disabled and nothing is re-enabling it, then perform the following:
+Check that idmapping is disabled and nothing is re-enabling it, then perform the following steps (consolidated in the sketch after this list):
- Unmount the share-- Disable id-mapping with # echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping
+- Disable idmapping with # echo Y > /sys/module/nfs/parameters/nfs4_disable_idmapping
- Mount the share back - If running rsync, run rsync with the "--numeric-ids" argument from a directory that does not have a bad dir/file name.
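As a shell sketch (the mount path, account, and share names are placeholders; the mount options follow the usual Azure Files NFS form):

```bash
# Sketch of the workaround; adjust paths and names for your environment.
sudo umount /mnt/myshare                                             # unmount the share
echo Y | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping  # disable idmapping
sudo mount -t nfs -o vers=4,minorversion=1,sec=sys \
  <account>.file.core.windows.net:/<account>/<share> /mnt/myshare    # remount the share
rsync --numeric-ids -av /src/ /mnt/myshare/                          # rsync with numeric IDs
```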
The following diagram depicts connectivity using public endpoints.
### Cause 2: Secure transfer required is enabled
-Double encryption is not supported for NFS shares yet. Azure provides a layer of encryption for all data in transit between Azure datacenters using MACSec. NFS shares can only be accessed from trusted virtual networks and over VPN tunnels. No additional transport layer encryption is available on NFS shares.
+Double encryption isn't supported for NFS shares yet. Azure provides a layer of encryption for all data in transit between Azure datacenters using MACSec. NFS shares can only be accessed from trusted virtual networks and over VPN tunnels. No additional transport layer encryption is available on NFS shares.
#### Solution
To check if the NFS package is installed, run: `rpm -qa | grep nfs-utils`
#### Solution
-If the package is not installed, install the package on your distribution.
+If the package isn't installed, install the package on your distribution.
##### Ubuntu or Debian
The NFS protocol communicates with its server over port 2049. Make sure that this
#### Solution
-Verify that port 2049 is open on your client by running the following command: `telnet <storageaccountnamehere>.file.core.windows.net 2049`. If the port is not open, open it.
+Verify that port 2049 is open on your client by running the following command: `telnet <storageaccountnamehere>.file.core.windows.net 2049`. If the port isn't open, open it.
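If `telnet` isn't installed, netcat gives an equivalent reachability check (a sketch; the account name is a placeholder):

```bash
# Reports success when TCP 2049 on the storage endpoint is reachable.
nc -zv mystorageaccount.file.core.windows.net 2049
```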
## ls hangs for large directory enumeration on some kernels ### Cause: A bug was introduced in Linux kernel v5.11 and was fixed in v5.12.5.
-Some kernel versions have a bug which causes directory listings to result in an endless READDIR sequence. Very small directories where all entries can be shipped in one call will not have the problem.
+Some kernel versions have a bug that causes directory listings to result in an endless READDIR sequence. Very small directories where all entries can be shipped in one call won't have the problem.
The bug was introduced in Linux kernel v5.11 and was fixed in v5.12.5. So anything in between has the bug. RHEL 8.4 is known to have this kernel version. #### Workaround: Downgrading or upgrading the kernel
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-performance.md
Title: Azure file shares performance troubleshooting guide description: Troubleshoot known performance issues with Azure file shares. Discover potential causes and associated workarounds when these problems are encountered.-+ Last updated 07/06/2021-+ #Customer intent: As a < type of user >, I want < what? > so that < why? >.
storage Storage Blobs Container Calculate Size Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
This script uses the following commands to calculate the size of the Blob storag
||| | [az group create](/cli/azure/group) | Creates a resource group in which all resources are stored. | | [az storage blob upload](/cli/azure/storage/account) | Uploads local files to an Azure Blob storage container. |
-| [az storage blob list](/cli/azure/storage/account/keys) | Lists the blobs in an Azure Blob storage container. |
+| [az storage blob list](/cli/azure/storage/blob?view=azure-cli-latest#az-storage-blob-list) | Lists the blobs in an Azure Blob storage container (see the sizing sketch below). |
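A sketch of the sizing step itself (account and container names are placeholders; supply credentials via `--auth-mode login` or an account key):

```azurecli-interactive
# Sum the sizes of all blobs in a container, in bytes.
az storage blob list \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --auth-mode login \
  --query "sum([].properties.contentLength)" \
  --output tsv
```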
## Next steps
virtual-machine-scale-sets Virtual Machine Scale Sets Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md
The following example shows how to add a second IP Configuration to your NIC.
In order to enhance default network security, [virtual machine scale sets with Flexible orchestration](..\virtual-machines\flexible-virtual-machine-scale-sets.md) will require that instances created implicitly via the autoscaling profile have outbound connectivity defined explicitly through one of the following methods: -- For most scenarios, we recommend [NAT Gateway attached to the subnet](../virtual-network/nat-gateway/tutorial-create-nat-gateway-portal.md).
+- For most scenarios, we recommend [NAT Gateway attached to the subnet](../virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md); see the sketch after this list.
- For scenarios with high security requirements or when using Azure Firewall or Network Virtual Appliance (NVA), you can specify a custom User Defined Route as next hop through firewall. - Instances are in the backend pool of a Standard SKU Azure Load Balancer. - Attach a Public IP Address to the instance network interface.
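A minimal sketch of the recommended NAT gateway option (resource names are placeholders):

```azurecli-interactive
# Create a NAT gateway with a Standard public IP and attach it to the subnet
# used by the scale set.
az network public-ip create -g MyResourceGroup -n myNatIp --sku Standard
az network nat gateway create -g MyResourceGroup -n myNatGateway \
  --public-ip-addresses myNatIp --idle-timeout 4
az network vnet subnet update -g MyResourceGroup --vnet-name myVnet \
  -n mySubnet --nat-gateway myNatGateway
```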
See [Default outbound access in Azure](../virtual-network/ip-services/default-ou
## Next steps
-For more information about Azure virtual networks, see [Azure virtual networks overview](../virtual-network/virtual-networks-overview.md).
+For more information about Azure virtual networks, see [Azure virtual networks overview](../virtual-network/virtual-networks-overview.md).
virtual-machines Flexible Virtual Machine Scale Sets Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/flexible-virtual-machine-scale-sets-migration-resources.md
Networking outbound access behavior will vary depending on how you choose to cre
In order to enhance default network security, **virtual machine instances created implicitly via the autoscaling profile do not have default outbound access**. In order to use virtual machine scale sets with implicitly created VM instances, outbound access must be explicitly defined through one of the following methods: -- For most scenarios, we recommend [NAT Gateway attached to the subnet](../virtual-network/nat-gateway/tutorial-create-nat-gateway-portal.md).
+- For most scenarios, we recommend [NAT Gateway attached to the subnet](../virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md).
- For scenarios with high security requirements or when using Azure Firewall or Network Virtual Appliance (NVA), you can specify a custom User Defined Route as next hop through firewall. - Instances are in the backend pool of a Standard SKU Azure Load Balancer. - Attach a Public IP Address to the instance network interface.
Virtual machine scale sets in Flexible orchestration mode do not currently have
## Next steps > [!div class="nextstepaction"]
-> [Compare the API differences between Uniform and Flexible orchestration modes.](../virtual-machine-scale-sets/orchestration-modes-api-comparison.md)
+> [Compare the API differences between Uniform and Flexible orchestration modes.](../virtual-machine-scale-sets/orchestration-modes-api-comparison.md)
virtual-machines Flatcar Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/flatcar-create-upload-vhd.md
Linux for each of the Flatcar supported channels:
- [stable](https://stable.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2) - [beta](https://beta.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2) - [alpha](https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)-- [edge](https://edge.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)
+- [LTS](https://lts.release.flatcar-linux.net/amd64-usr/current/flatcar_production_azure_image.vhd.bz2)
This image is already fully set up and optimized to run on Azure. You only need to decompress it.
virtual-machines N Series Driver Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/n-series-driver-setup.md
sudo reboot
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm sudo yum install dkms
- CUDA_REPO_PKG=cuda-repo-rhel7-10.0.130-1.x86_64.rpm
- wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
-
- sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
- rm -f /tmp/${CUDA_REPO_PKG}
+ sudo wget https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/cuda-rhel7.repo -O /etc/yum.repos.d/cuda-rhel7.repo
sudo yum install cuda-drivers ```
For example, CentOS 8 and RHEL 8 will need the following steps.
sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm sudo yum install dkms
- CUDA_REPO_PKG=cuda-repo-rhel8-10.2.89-1.x86_64.rpm
- wget https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/${CUDA_REPO_PKG} -O /tmp/${CUDA_REPO_PKG}
-
- sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
- rm -f /tmp/${CUDA_REPO_PKG}
+ sudo wget https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo -O /etc/yum.repos.d/cuda-rhel8.repo
sudo yum install cuda-drivers ```
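A quick post-install sanity check with standard NVIDIA tooling, after the required reboot:

```bash
# Lists the detected GPUs and the loaded driver version.
nvidia-smi
```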
virtual-machines Troubleshooting Shared Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/troubleshooting-shared-images.md
If you have problems performing any operations on Azure Compute Gallery (formerl
## Creating or modifying a gallery ##
-**Message**: *Gallery name is invalid. Allowed characters are English alphanumeric characters, with underscores, and periods allowed in the middle, up to 80 characters total. All other special characters, including dashes, are disallowed.*
-**Cause**: The name for the gallery does not meet the naming requirements.
-**Workaround**: Choose a name that meets the following conditions:
-- Has an 80-character limit-- Contains only English letters, numbers, underscores, and periods-- Starts and ends with English letters or numbers-
-**Message**: *The provided resource name \<galleryName\> has these invalid trailing characters: \<character\>. The name can not end with characters: \<character\>*
-**Cause**: The name for the gallery ends with a period or underscore.
-**Workaround**: Choose a name for the gallery that meets the following conditions:
-- Has an 80-character limit-- Contains only English letters, numbers, underscores, and periods-- Starts and ends with English letters or numbers-
-**Message**: *The provided location \<region\> is not available for resource type 'Microsoft.Compute/galleries'. List of available regions for the resource type is …*
-**Cause**: The region specified for the gallery is incorrect or requires an access request.
-**Workaround**: Check that the region name is correct. If the region name is correct, submit [an access request](/troubleshoot/azure/general/region-access-request-process) for the region.
-
-**Message**: *Can not delete resource before nested resources are deleted.*
-**Cause**: You've tried to delete a gallery that contains at least one existing image definition. A gallery must be empty before it can be deleted.
-**Workaround**: Delete all image definitions inside the gallery and then proceed to delete the gallery. If the image definition contains image versions, you must delete the image versions before you delete the image definitions.
-
-**Message**: *The gallery name \<galleryName\> is not unique within the subscription \<subscriptionID\>. Please pick another gallery name.*
-**Cause**: You have an existing gallery with the same name and have tried to create another gallery with the same name.
-**Workaround**: Choose a different name for the gallery.
-
-**Message**: *The resource \<galleryName\> already exists in location \<region\_1\> in resource group \<resourceGroup\>. A resource with the same name cannot be created in location \<region\_2\>. Please select a new resource name.*
-**Cause**: You have an existing gallery with the same name and have tried to create another gallery with the same name.
-**Workaround**: Choose a different name for the gallery.
+| Error message | Cause | Mitigation |
+| :--- | :--- | :--- |
+| Gallery name is invalid. Allowed characters are English alphanumeric characters, with underscores, and periods allowed in the middle, up to 80 characters total. All other special characters, including dashes, are disallowed. | The name for the gallery does not meet the naming requirements | Choose a name that meets the following conditions: <ol><li>Has an 80-character limit</li><li>Contains only English letters, numbers, underscores, and periods</li><li>Starts and ends with English letters or numbers</li></ol> |
+| The provided resource name \<galleryName\> has these invalid trailing characters: \<character\>. The name can not end with characters: \<character\> | The name for the gallery ends with a period or underscore | Choose a name for the gallery that meets the following conditions: <ol><li>Has an 80-character limit</li><li>Contains only English letters, numbers, underscores, and periods</li><li>Starts and ends with English letters or numbers</li></ol> |
+| The provided location \<region\> is not available for resource type 'Microsoft.Compute/galleries'. List of available regions for the resource type is … | The region specified for the gallery is incorrect or requires an access request | Check that the region name is correct. If the region name is correct, submit [an access request](/troubleshoot/azure/general/region-access-request-process) for the region |
+| Can not delete resource before nested resources are deleted | You've tried to delete a gallery that contains at least one existing image definition. A gallery must be empty before it can be deleted | Delete all image definitions inside the gallery and then proceed to delete the gallery. If the image definition contains image versions, you must delete the image versions before you delete the image definitions |
+| The gallery name \<galleryName\> is not unique within the subscription \<subscriptionID\>. Please pick another gallery name | You have an existing gallery with the same name and have tried to create another gallery with the same name | Choose a different name for the gallery |
+| The resource \<galleryName\> already exists in location \<region\_1\> in resource group \<resourceGroup\>. A resource with the same name cannot be created in location \<region\_2\>. Please select a new resource name | You have an existing gallery with the same name and have tried to create another gallery with the same name | Choose a different name for the gallery |
## Creating or modifying image definitions ##
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
Title: Deploy a trusted launch VM description: Deploy a VM that uses trusted launch.--++ Previously updated : 12/07/2021 Last updated : 03/22/2022 # Deploy a VM with trusted launch enabled
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
[Trusted launch](trusted-launch.md) is a way to improve the security of [generation 2](generation-2.md) VMs. Trusted launch protects against advanced and persistent attack techniques by combining infrastructure technologies like vTPM and secure boot.
- Configure machines to automatically install the Azure Monitor and Azure Security agents on virtual machines
-## Deploy a trusted VM
+## Deploy a trusted launch VM
Create a virtual machine with trusted launch enabled. Choose an option below: ### [Portal](#tab/portal)
You can deploy trusted launch VMs using a quickstart template:
+## Deploy a trusted launch VM from an Azure Compute Gallery image
+
+### [Portal](#tab/portal2)
+
+1. Sign in to the Azure [portal](https://portal.azure.com).
+2. To create an Azure Compute Gallery Image from a VM, open an existing Trusted launch VM and select **Capture**.
+3. In the **Create an Image** page that follows, allow the image to be shared to the gallery as a VM image version, because Managed Images are not supported for Trusted Launch.
+4. Create a new target Azure Compute Gallery or select an existing gallery.
+5. Select the **Operating system state** as either **Generalized** or **Specialized**.
+6. Create a new image definition by providing a name, publisher, offer and SKU details. The Security Type of the image definition is already set to 'Trusted launch'.
+7. Provide a version number for the image version.
+8. Modify replication options if required.
+9. At the bottom of the **Create an Image** page, select **Review + Create** and when validation shows as passed, select **Create**.
+10. Once the image version is created, go to the image version directly. Alternatively, you can navigate to the required image version through the image definition.
+11. On the **VM image version** page, select the **+ Create VM** to land on the Create a virtual machine page.
+12. In the Create a virtual machine page, under **Resource group**, select **Create new** and type a name for your resource group or select an existing resource group from the dropdown.
+13. Under **Instance details**, type a name for the virtual machine and choose a region that supports [trusted launch](trusted-launch.md#limitations).
+14. The image and the security type are already populated based on the selected image version. The **Secure Boot** and **vTPM** checkboxes are enabled by default.
+15. Fill in the **Administrator account** information and then **Inbound port rules**.
+16. At the bottom of the page, select **Review + Create**.
+17. On the **Create a virtual machine** page, you can see the details about the VM you are about to deploy. Once validation shows as passed, select **Create**.
+
+### [CLI](#tab/cli2)
+
+Make sure you're running the latest version of the Azure CLI.
+
+Sign in to Azure using `az login`.
+
+```azurecli-interactive
+az login
+```
+
+Create an image definition with TrustedLaunch security type
+
+```azurecli-interactive
+az sig image-definition create --resource-group MyResourceGroup --location eastus \
+--gallery-name MyGallery --gallery-image-definition MyImageDef \
+--publisher TrustedLaunchPublisher --offer TrustedLaunchOffer --sku TrustedLaunchSku \
+--os-type Linux --os-state Generalized \
+--hyper-v-generation V2 \
+--features SecurityType=TrustedLaunch
+```
+
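Before creating the image version, generalize the source VM. A minimal sketch (the in-VM step uses the Azure Linux Agent; resource names are placeholders):

```azurecli-interactive
# Inside the Linux VM, deprovision with the Azure Linux Agent:
#   sudo waagent -deprovision+user -force
# Then, from the CLI, deallocate and mark the VM as generalized:
az vm deallocate --resource-group MyResourceGroup --name myVM
az vm generalize --resource-group MyResourceGroup --name myVM
```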
+With the VM generalized, create an image version using the existing Trusted Launch VM as the image source
+
+```azurecli-interactive
+az sig image-version create --resource-group MyResourceGroup \
+--gallery-name MyGallery --gallery-image-definition MyImageDef \
+--gallery-image-version 1.0.0 \
+--managed-image /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
+```
+Create a Trusted Launch VM from the above image version
+
+```azurecli-interactive
+adminUsername=linuxvm
+az vm create --resource-group MyResourceGroup \
+ --name myTrustedLaunchVM \
+ --image "/subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/galleries/MyGallery/images/MyImageDef" \
+ --security-type TrustedLaunch \
+ --enable-secure-boot true \
+ --enable-vtpm true \
+ --admin-username $adminUsername \
+ --generate-ssh-keys
+```
+
+### [PowerShell](#tab/powershell2)
+
+Create an image definition with `TrustedLaunch` security type
+
+```azurepowershell-interactive
+$rgName = "MyResourceGroup"
+$galleryName = "MyGallery"
+$galleryImageDefinitionName = "MyImageDef"
+$location = "eastus"
+$publisherName = "TrustedlaunchPublisher"
+$offerName = "TrustedlaunchOffer"
+$skuName = "TrustedlaunchSku"
+$description = "My gallery"
+$SecurityType = @{Name='SecurityType';Value='TrustedLaunch'}
+$features = @($SecurityType)
+New-AzGalleryImageDefinition -ResourceGroupName $rgName -GalleryName $galleryName -Name $galleryImageDefinitionName -Location $location -Publisher $publisherName -Offer $offerName -Sku $skuName -HyperVGeneration "V2" -OsState "Generalized" -OsType "Windows" -Description $description -Feature $features
+```
+
+Generalize the VM using the Sysprep tool and create an image version with the existing Trusted Launch VM as the image source
+
+```azurepowershell-interactive
+$rgName = "MyResourceGroup"
+$galleryName = "MyGallery"
+$galleryImageDefinitionName = "MyImageDef"
+$location = "eastus"
+$galleryImageVersionName = "1.0.0"
+$sourceImageId = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myVMRG/providers/Microsoft.Compute/virtualMachines/myVM"
+New-AzGalleryImageVersion -ResourceGroupName $rgName -GalleryName $galleryName -GalleryImageDefinitionName $galleryImageDefinitionName -Name $galleryImageVersionName -Location $location -SourceImageId $sourceImageId
+```
+Create a Trusted Launch VM from the above image version
+
+```azurepowershell-interactive
+$rgName = "MyResourceGroup"
+$galleryName = "MyGallery"
+$galleryImageDefinitionName = "MyImageDef"
+$location = "eastus"
+$vmName = "myVMfromImage"
+$vmSize = "Standard_D2s_v3"
+$imageDefinition = Get-AzGalleryImageDefinition `
+ -GalleryName $galleryName `
+ -ResourceGroupName $rgName `
+ -Name $galleryImageDefinitionName
+$cred = Get-Credential `
+ -Message "Enter a username and password for the virtual machine"
+# Network pieces
+$subnetConfig = New-AzVirtualNetworkSubnetConfig `
+ -Name mySubnet `
+ -AddressPrefix 192.168.1.0/24
+$vnet = New-AzVirtualNetwork `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name MYvNET `
+ -AddressPrefix 192.168.0.0/16 `
+ -Subnet $subnetConfig
+$pip = New-AzPublicIpAddress `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name "mypublicdns$(Get-Random)" `
+ -AllocationMethod Static `
+ -IdleTimeoutInMinutes 4
+$nsgRuleRDP = New-AzNetworkSecurityRuleConfig `
+ -Name myNetworkSecurityGroupRuleRDP `
+ -Protocol Tcp `
+ -Direction Inbound `
+ -Priority 1000 `
+ -SourceAddressPrefix * `
+ -SourcePortRange * `
+ -DestinationAddressPrefix * `
+ -DestinationPortRange 3389 `
+ -Access Deny
+$nsg = New-AzNetworkSecurityGroup `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -Name myNetworkSecurityGroup `
+ -SecurityRules $nsgRuleRDP
+$nic = New-AzNetworkInterface `
+ -Name myNic `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -SubnetId $vnet.Subnets[0].Id `
+ -PublicIpAddressId $pip.Id `
+ -NetworkSecurityGroupId $nsg.Id
+$vm = New-AzVMConfig -vmName $vmName -vmSize $vmSize | `
+ Set-AzVMOperatingSystem -Windows -ComputerName $vmName -Credential $cred | `
+ Set-AzVMSourceImage -Id $imageDefinition.Id | `
+ Add-AzVMNetworkInterface -Id $nic.Id
+$vm = Set-AzVMSecurityProfile -SecurityType "TrustedLaunch" -VM $vm
+$vm = Set-AzVmUefi -VM $vm `
+ -EnableVtpm $true `
+ -EnableSecureBoot $true
+New-AzVM `
+ -ResourceGroupName $rgName `
+ -Location $location `
+ -VM $vm
+```
+ ## Verify or update your settings For VMs created with trusted launch enabled, you can view the trusted launch configuration by visiting the **Overview** page for the VM in the portal. The **Properties** tab will show the status of Trusted Launch features:
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Title: Trusted launch for Azure VMs description: Learn about trusted launch for Azure virtual machines.--++ Previously updated : 11/29/2021 Last updated : 05/02/2022 # Trusted launch for Azure virtual machines
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
Azure offers trusted launch as a seamless way to improve the security of [generation 2](generation-2.md) VMs. Trusted launch protects against advanced and persistent attack techniques. Trusted launch is composed of several, coordinated infrastructure technologies that can be enabled independently. Each technology provides another layer of defense against sophisticated threats.
Azure offers trusted launch as a seamless way to improve the security of [genera
- DCsv2-series - Dv4-series, Dsv4-series, Dsv3-series, Dsv2-series - Ddv4-series, Ddsv4-series-- Fsv2-series
+- Dv5-series, Dsv5-series
+- Ddv5-series, Ddsv5-series
+- Dasv5-series, Dadsv5-series
+- Ev5-series, Esv5-series
+- Edv5-series, Edsv5-series
+- Easv5-series, Eadsv5-series
+- Ebsv5-series, Ebdsv5-series
- Eav4-series, Easv4-series - Ev4-series, Esv4-series, Esv3-series - Edv4-series, Edsv4-series
+- Fsv2-series
- Lsv2-series **OS support**:-- Redhat Enterprise Linux 8.3-- SUSE 15 SP2-- Ubuntu 20.04 LTS-- Ubuntu 18.04 LTS
+- Red Hat Enterprise Linux 8.3, 8.4, 8.5 LVM
+- SUSE Enterprise Linux 15 SP3
+- Ubuntu Server 22.04 LTS
+- Ubuntu Server 20.04 LTS
+- Ubuntu Server 18.04 LTS
- Debian 11-- CentOS 8.4-- Oracle Linux 8.3
+- CentOS 8.3, 8.4
+- Oracle Linux 8.3 LVM
- CBL-Mariner - Windows Server 2022 - Windows Server 2019
No additional cost to existing VM pricing.
**The following features are not supported**: - Azure Site Recovery-- Azure Compute Gallery (formerly known as Shared Image Gallery)-- [Ephemeral OS disk (Preview)](ephemeral-os-disks.md#trusted-launch-for-ephemeral-os-disks-preview) - Shared disk - Ultra disk - Managed image
Trusted launch is integrated with Azure Defender for Cloud to ensure your VMs ar
- **Recommendation to enable Secure Boot** - This Recommendation only applies for VMs that support trusted launch. Azure Defender for Cloud will identify VMs that can enable Secure Boot, but have it disabled. It will issue a low severity recommendation to enable it. - **Recommendation to enable vTPM** - If your VM has vTPM enabled, Azure Defender for Cloud can use it to perform Guest Attestation and identify advanced threat patterns. If Azure Defender for Cloud identifies VMs that support trusted launch and have vTPM disabled, it will issue a low severity recommendation to enable it. - **Recommendation to install guest attestation extension** - If your VM has secure boot and vTPM enabled but it doesn't have the guest attestation extension installed, Azure Defender for Cloud will issue a low severity recommendation to install the guest attestation extension on it. This extension allows Azure Defender for Cloud to proactively attest and monitor the boot integrity of your VMs. Boot integrity is attested via remote attestation.-- **Attestation health assessment** - If your VM has vTPM enabled and attestation extension installed, Azure Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as remote attestation. Azure Defender for Cloud issues an assessment, indicating the status of remote attestation.
+- **Attestation health assessment or Boot Integrity Monitoring** - If your VM has Secure Boot and vTPM enabled and the attestation extension installed, Azure Defender for Cloud can remotely validate that your VM booted in a healthy way. This is known as boot integrity monitoring. Azure Defender for Cloud issues an assessment, indicating the status of remote attestation. Currently, boot integrity monitoring is supported for both Windows and Linux single virtual machines and uniform scale sets. A CLI sketch for installing the extension follows this list.
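A sketch of installing the extension with the Azure CLI; the publisher and extension names below are the commonly documented ones for Linux, but treat them as assumptions and verify them for your platform:

```azurecli-interactive
# Install the guest attestation (boot integrity) extension on a Linux trusted
# launch VM. Windows VMs use the Microsoft.Azure.Security.WindowsAttestation
# publisher instead (assumption: confirm names against current docs).
az vm extension set \
  --resource-group MyResourceGroup \
  --vm-name myTrustedLaunchVM \
  --publisher Microsoft.Azure.Security.LinuxAttestation \
  --name GuestAttestation
```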
## Microsoft Defender for Cloud integration
Azure Defender for Cloud periodically performs attestation. If the attestation f
- The attestation quote could not be verified to originate from the vTPM of the attested VM. This can indicate that malware is present and may be intercepting traffic to the TPM. - The attestation extension on the VM is not responding. This can indicate a denial-of-service attack by malware, or an OS admin.
-### How does trusted launch compared to Hyper-V Shielded VM?
+### How does trusted launch compare to Hyper-V Shielded VM?
Hyper-V Shielded VM is currently available on Hyper-V only. [Hyper-V Shielded VM](/windows-server/security/guarded-fabric-shielded-vm/guarded-fabric-and-shielded-vms) is typically deployed in conjunction with Guarded Fabric. A Guarded Fabric consists of a Host Guardian Service (HGS), one or more guarded hosts, and a set of Shielded VMs. Hyper-V Shielded VMs are intended for use in fabrics where the data and state of the virtual machine must be protected from both fabric administrators and untrusted software that might be running on the Hyper-V hosts. Trusted launch on the other hand can be deployed as a standalone virtual machine or virtual machine scale sets on Azure without additional deployment and management of HGS. All of the trusted launch features can be enabled with a simple change in deployment code or a checkbox on the Azure portal.
+### Does trusted launch support Azure Compute Gallery?
+
+Trusted launch now allows images to be created and shared through the Azure Compute Gallery (formerly Shared Image Gallery). The image source can be an existing Azure VM that is either generalized or specialized, an existing managed disk or snapshot, a VHD, or an image version from another gallery. To deploy a trusted launch VM from an Azure Compute Gallery image version, see [trusted launch VM](https://aka.ms/trusted-launch-azurecomputegallery).
+
+### Does trusted launch support Azure Backup?
+
+Trusted launch now supports Azure Backup in preview. For more information, see [Support matrix for Azure VM backup](../backup/backup-support-matrix-iaas.md#vm-compute-support).
+
+### Does trusted launch support ephemeral OS disks?
+
+Trusted launch now supports ephemeral OS disks in preview. While using ephemeral disks for trusted launch VMs, keys and secrets generated or sealed by the vTPM after the creation of the VM might not persist across operations like reimaging and platform events like service healing. For more information, see [Trusted Launch for Ephemeral OS disks (Preview)](https://aka.ms/ephemeral-os-disks-support-trusted-launch).
### What is VM Guest State (VMGS)? VM Guest State (VMGS) is specific to trusted launch VMs. It's a blob that's managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. The lifecycle of the VMGS blob is tied to that of the OS disk. ## Next steps
-Deploy a [trusted launch VM using the portal](trusted-launch-portal.md).
+Deploy a [trusted launch VM](trusted-launch-portal.md).
virtual-machines High Availability Guide Standard Load Balancer Outbound Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-standard-load-balancer-outbound-connections.md
The configuration would look like:
2. Create Backend pool **MyBackendPoolOfPublicILB** and add the VMs. 1. Select the Virtual network 1. Select the VMs and their IP addresses and add them to the backend pool
-3. Create a NAT gateway for outbound internet access. For more information see [Tutorial: Create a NAT gateway - Azure CLI](../../../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md).
+3. Create a NAT gateway for outbound internet access. For more information, see [Quickstart: Create a NAT gateway - Azure CLI](../../../virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md).
4. Create network security group rules to restrict access to specific public endpoints. If there's an existing network security group, you can adjust it. The example below shows how to enable access to the Azure management API: 1. Navigate to the network security group 1. Select Outbound Security Rules
If outbound traffic is routed via third party, URL-based firewall proxy:
## Next steps * [Learn how to configure Pacemaker on SUSE in Azure](./high-availability-guide-suse-pacemaker.md)
-* [Learn how to configure Pacemaker on Red Hat in Azure](./high-availability-guide-rhel-pacemaker.md)
+* [Learn how to configure Pacemaker on Red Hat in Azure](./high-availability-guide-rhel-pacemaker.md)
virtual-machines Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md
vm-linux Previously updated : 03/24/2021 Last updated : 05/10/2022
[anf-azure-doc]:/azure/azure-netapp-files/ [anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
[2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799
For more information about the required ports for SAP HANA, read the chapter [Co
10.32.0.5 hanadb2 ```
-3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]. Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
<pre><code> vi /etc/sysctl.d/netapp-hana.conf # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
- net.core.rmem_default = 16777216
- net.core.wmem_default = 16777216
- net.core.optmem_max = 16777216
- net.ipv4.tcp_rmem = 65536 16777216 16777216
- net.ipv4.tcp_wmem = 65536 16777216 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000 net.ipv4.tcp_slow_start_after_idle=0 net.ipv4.tcp_no_metrics_save = 1
For more information about the required ports for SAP HANA, read the chapter [Co
> [!TIP] > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-5. **[A]** Adjust the sunrpc settings, as recommended in the [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+5. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code> vi /etc/modprobe.d/sunrpc.conf
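 # Entry recommended by the SAP note (assumption: verify the value against note 3024346)
 options sunrpc tcp_max_slot_table_entries=128
 </code></pre>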
virtual-machines Sap Hana High Availability Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-suse.md
vm-linux Previously updated : 03/24/2021 Last updated : 05/10/2022
For more information about the required ports for SAP HANA, read the chapter [Co
10.3.0.5 hanadb2 ```
-2.**[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/pdf.html?item=/media/17152-tr4746pdf.pdf). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+2. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
``` vi /etc/sysctl.d/netapp-hana.conf # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
- net.core.rmem_default = 16777216
- net.core.wmem_default = 16777216
- net.core.optmem_max = 16777216
- net.ipv4.tcp_rmem = 65536 16777216 16777216
- net.ipv4.tcp_wmem = 65536 16777216 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000 net.ipv4.tcp_slow_start_after_idle=0 net.ipv4.tcp_no_metrics_save = 1
For more information about the required ports for SAP HANA, read the chapter [Co
> [!TIP] > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more information, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-4.**[A]** Adjust the sunrpc settings, as recommended in the [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/pdf.html?item=/media/17152-tr4746pdf.pdf).
+4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
``` vi /etc/modprobe.d/sunrpc.conf
virtual-machines Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-rhel.md
vm-windows Previously updated : 02/11/2022 Last updated : 05/10/2022
[anf-azure-doc]:../../../azure-netapp-files/index.yml [anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
[2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799
Some readers will benefit from consulting a variety of SAP notes and resources b
* Azure-specific RHEL documentation: * [Install SAP HANA on Red Hat Enterprise Linux for use in Microsoft Azure](https://access.redhat.com/public-cloud/microsoft-azure). * [Red Hat Enterprise Linux Solution for SAP HANA scale-out and system replication](https://access.redhat.com/solutions/4386601).
-* [NetApp SAP applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
* [Azure NetApp Files documentation][anf-azure-doc]. * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md).
Configure and prepare your operating system by doing the following:
10.23.1.207 hana-s2-db3-hsr ```
-1. **[A]** Prepare the operating system for running SAP HANA. For more information, see [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]. Create configuration file */etc/sysctl.d/netapp-hana.conf* for the Azure NetApp Files configuration settings.
+1. **[A]** Prepare the operating system for running SAP HANA. For more information, see SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the Azure NetApp Files configuration settings.
<pre><code> vi /etc/sysctl.d/netapp-hana.conf # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
- net.core.rmem_default = 16777216
- net.core.wmem_default = 16777216
- net.core.optmem_max = 16777216
- net.ipv4.tcp_rmem = 65536 16777216 16777216
- net.ipv4.tcp_wmem = 65536 16777216 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000 net.ipv4.tcp_slow_start_after_idle=0 net.ipv4.tcp_no_metrics_save = 1
Configure and prepare your operating system by doing the following:
> [!TIP] > Avoid setting `net.ipv4.ip_local_port_range` and `net.ipv4.ip_local_reserved_ports` explicitly in the `sysctl` configuration files, to allow the SAP host agent to manage the port ranges. For more details, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-1. **[A]** Adjust the `sunrpc` settings, as recommended in [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+1. **[A]** Adjust the `sunrpc` settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code> vi /etc/modprobe.d/sunrpc.conf
virtual-machines Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-high-availability-scale-out-hsr-suse.md
vm-windows Previously updated : 02/11/2022 Last updated : 05/10/2022
[anf-azure-doc]:../../../azure-netapp-files/index.yml [anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
[2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799
Before you begin, refer to the following SAP notes and papers:
* [SUSE SAP HA Best Practice Guides][suse-ha-guide]: Contains all required information to set up NetWeaver High Availability and SAP HANA System Replication on-premises (to be used as a general baseline; they provide much more detailed information) * [SUSE High Availability Extension 12 SP5 Release Notes](https://www.suse.com/releasenotes/x86_64/SLE-HA/12-SP5/) * [Handling failed NFS share in SUSE HA cluster for HANA system replication](https://www.suse.com/support/kb/doc/?id=000019904)
-* [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) ## Overview
Configure and prepare your OS by doing the following steps:
10.23.1.201 hana-s2-db3-hsr ```
-3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]. Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
<pre><code> vi /etc/sysctl.d/netapp-hana.conf # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
- net.core.rmem_default = 16777216
- net.core.wmem_default = 16777216
- net.core.optmem_max = 16777216
- net.ipv4.tcp_rmem = 65536 16777216 16777216
- net.ipv4.tcp_wmem = 65536 16777216 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000 net.ipv4.tcp_slow_start_after_idle=0 net.ipv4.tcp_no_metrics_save = 1
Configure and prepare your OS by doing the following steps:
> [!TIP] > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-4. **[A]** Adjust the sunrpc settings, as recommended in the [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code> vi /etc/modprobe.d/sunrpc.conf
virtual-machines Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel.md
vm-windows Previously updated : 02/11/2022 Last updated : 05/09/2022
[anf-azure-doc]:/azure/azure-netapp-files/ [anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
+ [2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799
Before you begin, refer to the following SAP notes and papers:
* [Red Hat Enterprise Linux Networking Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide) * Azure-specific RHEL documentation: * [Install SAP HANA on Red Hat Enterprise Linux for Use in Microsoft Azure](https://access.redhat.com/public-cloud/microsoft-azure)
-* [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) ## Overview
Configure and prepare your OS by doing the following steps:
yum install nfs-utils </code></pre>
-3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]. Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on Azure NetApp with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
<pre><code> vi /etc/sysctl.d/netapp-hana.conf # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
- net.core.rmem_default = 16777216
- net.core.wmem_default = 16777216
- net.core.optmem_max = 16777216
- net.ipv4.tcp_rmem = 65536 16777216 16777216
- net.ipv4.tcp_wmem = 65536 16777216 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000 net.ipv4.tcp_slow_start_after_idle=0 net.ipv4.tcp_no_metrics_save = 1
Configure and prepare your OS by doing the following steps:
> [!TIP] > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-5. **[A]** Adjust the sunrpc settings, as recommended in the [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+5. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code> vi /etc/modprobe.d/sunrpc.conf
In this example for deploying SAP HANA in scale-out configuration with standby n
- `async_write_submit_active` **on** - `async_write_submit_blocks` **all**
- For more information, see [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+ For more information, see [I/O stack configuration for SAP HANA](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/saphana_aff_nfs_i_o_stack_configuration_for_sap_hana.html).
Starting with SAP HANA 2.0 systems, you can set the parameters in `global.ini`. For more information, see SAP Note [1999930](https://launchpad.support.sap.com/#/notes/1999930).
virtual-machines Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse.md
vm-windows Previously updated : 02/11/2022 Last updated : 05/10/2022
[anf-azure-doc]:/azure/azure-netapp-files/ [anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[anf-sap-applications-azure]:https://www.netapp.com/us/media/tr-4746.pdf
[2205917]:https://launchpad.support.sap.com/#/notes/2205917 [1944799]:https://launchpad.support.sap.com/#/notes/1944799
Before you begin, refer to the following SAP notes and papers:
* [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide] * [SUSE SAP HA Best Practice Guides][suse-ha-guide]: Contains all required information to set up NetWeaver High Availability and SAP HANA System Replication on-premises (to be used as a general baseline; they provide much more detailed information) * [SUSE High Availability Extension 12 SP3 Release Notes][suse-ha-12sp3-relnotes]
-* [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]
* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) ## Overview
Configure and prepare your OS by doing the following steps:
Reboot the VM to activate the changes.
-3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure]. Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
+3. **[A]** Prepare the OS for running SAP HANA on NetApp Systems with NFS, as described in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/netapp-hana.conf* for the NetApp configuration settings.
<pre><code> vi /etc/sysctl.d/netapp-hana.conf # Add the following entries in the configuration file net.core.rmem_max = 16777216 net.core.wmem_max = 16777216
- net.core.rmem_default = 16777216
- net.core.wmem_default = 16777216
- net.core.optmem_max = 16777216
- net.ipv4.tcp_rmem = 65536 16777216 16777216
- net.ipv4.tcp_wmem = 65536 16777216 16777216
+ net.ipv4.tcp_rmem = 4096 131072 16777216
+ net.ipv4.tcp_wmem = 4096 16384 16777216
net.core.netdev_max_backlog = 300000 net.ipv4.tcp_slow_start_after_idle=0 net.ipv4.tcp_no_metrics_save = 1
Configure and prepare your OS by doing the following steps:
> [!TIP] > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files to allow SAP Host Agent to manage the port ranges. For more details, see SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421).
-4. **[A]** Adjust the sunrpc settings, as recommended in the [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+4. **[A]** Adjust the sunrpc settings, as recommended in SAP note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).
<pre><code> vi /etc/modprobe.d/sunrpc.conf # Insert the following line options sunrpc tcp_max_slot_table_entries=128
In this example for deploying SAP HANA in scale-out configuration with standby n
- `async_write_submit_active` **on** - `async_write_submit_blocks` **all**
- For more information, see [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files][anf-sap-applications-azure].
+ For more information, see [I/O stack configuration for SAP HANA](https://docs.netapp.com/us-en/netapp-solutions-sap/bp/saphana_aff_nfs_i_o_stack_configuration_for_sap_hana.html).
Starting with SAP HANA 2.0 systems, you can set the parameters in `global.ini`. For more information, see SAP Note [1999930](https://launchpad.support.sap.com/#/notes/1999930).
virtual-network Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/cli-samples.md
The following table includes links to bash scripts with Azure CLI commands:
| [Filter inbound and outbound VM network traffic](./scripts/virtual-network-cli-sample-filter-network-traffic.md) | Creates a virtual network with front-end and back-end subnets. Inbound network traffic to the front-end subnet is limited to HTTP, HTTPS, and SSH. Outbound traffic to the internet from the back-end subnet is not permitted. | |[Configure IPv4 + IPv6 dual stack virtual network with Basic Load Balancer](./scripts/virtual-network-cli-sample-ipv6-dual-stack.md)|Deploys dual-stack (IPv4+IPv6) virtual network with two VMs and an Azure Basic Load Balancer with IPv4 and IPv6 public IP addresses. | |[Configure IPv4 + IPv6 dual stack virtual network with Standard Load Balancer](./scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md)|Deploys dual-stack (IPv4+IPv6) virtual network with two VMs and an Azure Standard Load Balancer with IPv4 and IPv6 public IP addresses. |
-|[Tutorial: Create and test a NAT gateway - Azure CLI](../virtual-network/nat-gateway/tutorial-create-nat-gateway-cli.md)|Create and validate a NAT gateway using a virtual machine. |
+|[Quickstart: Create and test a NAT gateway - Azure CLI](../virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md)|Create and validate a NAT gateway using a virtual machine. |
virtual-network Configure Public Ip Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-nat-gateway.md
In this section, you'll create a NAT gateway resource. You'll select the IP addr
7. Select **Create**. > [!NOTE]
-> This is a simple deployment of a NAT gateway. For advanced configuration and setup, see [Tutorial: Create a NAT gateway using the Azure portal](../nat-gateway/tutorial-create-nat-gateway-portal.md)
+> This is a simple deployment of a NAT gateway. For advanced configuration and setup, see [Quickstart: Create a NAT gateway using the Azure portal](../nat-gateway/quickstart-create-nat-gateway-portal.md)
> > For more information on Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../nat-gateway/nat-overview.md)
In this section, you'll change the outbound IP configuration to use a public IP
In this article, you learned how to create a NAT gateway and use an existing public IP. You replaced the IP address in a NAT gateway outbound IP configuration. Finally, you changed an outbound IP configuration to use a public IP prefix. - For more information about Azure Virtual Network NAT, see [What is Azure Virtual Network NAT?](../nat-gateway/nat-overview.md)-- To learn more about public IP addresses in Azure, see [Public IP addresses](./public-ip-addresses.md).
+- To learn more about public IP addresses in Azure, see [Public IP addresses](./public-ip-addresses.md).
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
When ready, you can issue the command to have your range advertised from Azure a
* Any IP addresses utilized from a custom IP prefix currently count against the standard public IP quota for a subscription and region. Contact Azure support to have quotas increased when required.
+* IPs brought to Azure cannot currently be used for Windows Server Activation.
+ ## Pricing * There is no charge to provision or use custom IP prefixes. There is no charge for any public IP prefixes and public IP addresses that are derived from custom IP prefixes.
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
You can associate the following resources to a static public IP address from a p
|Virtual machines| Associating public IPs from a prefix to your virtual machines in Azure reduces management overhead when adding IP addresses to an allowlist in the firewall. You can add an entire prefix with a single firewall rule. As you scale with virtual machines in Azure, you can associate IPs from the same prefix saving cost, time, and management overhead.| To associate IPs from a prefix to your virtual machine: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. [Associate the IP to your virtual machine's network interface.](./virtual-network-network-interface-addresses.md#add-ip-addresses) </br> You can also [associate the IPs to a Virtual Machine Scale Set](https://azure.microsoft.com/resources/templates/vmss-with-public-ip-prefix/). | Standard load balancers | Associating public IPs from a prefix to your frontend IP configuration or outbound rule of a load balancer ensures simplification of your Azure public IP address space. Simplify your scenario by grooming outbound connections from a range of contiguous IP addresses. | To associate IPs from a prefix to your load balancer: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When creating the load balancer, select or update the IP created in step 2 above as the frontend IP of your load balancer. | | Azure Firewall | You can use a public IP from a prefix for outbound SNAT. All outbound virtual network traffic is translated to the [Azure Firewall](../../firewall/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) public IP. | To associate an IP from a prefix to your firewall: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you [deploy the Azure firewall](../../firewall/tutorial-firewall-deploy-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json#deploy-the-firewall), be sure to select the IP you previously gave from the prefix.|
-| VPN Gateway (AZ SKU), Application Gateway v2, NAT Gateway | You can use a public IP from a prefix for your gateway | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](../../vpn-gateway/tutorial-create-gateway-portal.md), [Application Gateway](../../application-gateway/quick-create-portal.md#create-an-application-gateway), or [NAT Gateway](../nat-gateway/tutorial-create-nat-gateway-portal.md), be sure to select the IP you previously gave from the prefix.|
+| VPN Gateway (AZ SKU), Application Gateway v2, NAT Gateway | You can use a public IP from a prefix for your gateway | To associate an IP from a prefix to your gateway: </br> 1. [Create a prefix.](manage-public-ip-address-prefix.md) </br> 2. [Create an IP from the prefix.](manage-public-ip-address-prefix.md) </br> 3. When you deploy the [VPN Gateway](../../vpn-gateway/tutorial-create-gateway-portal.md), [Application Gateway](../../application-gateway/quick-create-portal.md#create-an-application-gateway), or [NAT Gateway](../nat-gateway/quickstart-create-nat-gateway-portal.md), be sure to select the IP you previously gave from the prefix.|
Additionally, the Public IP address prefix resource can be utilized directly by certain resources:
virtual-network Virtual Network Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md
Learn how to assign a public IP address to the following resources:
- [Application Gateway](../../application-gateway/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) - [Site-to-site connection using a VPN gateway](../../vpn-gateway/tutorial-site-to-site-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) - [Virtual Machine Scale Set](../../virtual-machine-scale-sets/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)-- [NAT gateway](../nat-gateway/tutorial-create-nat-gateway-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
+- [NAT gateway](../nat-gateway/quickstart-create-nat-gateway-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
- [Azure Bastion](../../bastion/quickstart-host-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) - [Azure Firewall](../../firewall/tutorial-firewall-deploy-portal-policy.md?toc=%2fazure%2fvirtual-network%2ftoc.json) - [Cross-region load balancer](../../load-balancer/tutorial-cross-region-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
## Next steps
-* To create and validate a NAT gateway, see [Tutorial: Create a NAT gateway using the Azure portal](tutorial-create-nat-gateway-portal.md)
+* To create and validate a NAT gateway, see [Quickstart: Create a NAT gateway using the Azure portal](quickstart-create-nat-gateway-portal.md)
* To view a video on more information about Azure Virtual Network NAT, see [How to get better outbound connectivity using an Azure NAT gateway](https://www.youtube.com/watch?v=2Ng_uM0ZaB4)
virtual-network Quickstart Create Nat Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md
+
+ Title: 'Quickstart: Create a NAT gateway - Azure CLI'
+
+description: Get started creating a NAT gateway using the Azure CLI.
+++++ Last updated : 02/04/2022+++
+# Quickstart: Create a NAT gateway using the Azure CLI
+
+This quickstart shows you how to use the Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
++++
+## Set parameter values to create resources
+
+Set the parameter values for use in creating the required resources. The Bash `$RANDOM` variable is used to create unique object names.
++
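The parameter block itself isn't shown in this change summary. As a minimal sketch, the following Bash variables illustrate the pattern; every name and the `eastus2` location are illustrative assumptions, with `$resourceGroup` matching the variable referenced in the cleanup command later in this article:

```azurecli-interactive
# Illustrative parameter values; $RANDOM appends a random suffix for uniqueness.
resourceGroup="myResourceGroupNAT$RANDOM"
location="eastus2"
vnet="myVNet$RANDOM"
subnet="mySubnet$RANDOM"
natGateway="myNATgateway$RANDOM"
publicIpNat="myPublicIP-NAT$RANDOM"
publicIpBastion="myPublicIP-Bastion$RANDOM"
bastion="myBastionHost$RANDOM"
vm="myVM$RANDOM"
```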
+## Create a resource group
+
+Create a resource group with [az group create](/cli/azure/group#az-group-create). An Azure resource group is a logical container into which Azure resources are deployed and managed.
++
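The command body is elided here; a minimal sketch using the illustrative variables above:

```azurecli-interactive
# Create the resource group that will contain all resources in this quickstart.
az group create \
    --name $resourceGroup \
    --location $location
```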
+## Create the NAT gateway
+
+In this section, we create the NAT gateway and supporting resources.
+
+### Create public IP address
+
+To access the Internet, you need one or more public IP addresses for the NAT gateway. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address resource.
++
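A minimal sketch, assuming a Standard SKU static public IP and the illustrative variable names defined earlier:

```azurecli-interactive
# Create the public IP address that the NAT gateway will use for outbound traffic.
az network public-ip create \
    --resource-group $resourceGroup \
    --name $publicIpNat \
    --sku Standard \
    --allocation-method Static
```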
+### Create NAT gateway resource
+
+Create an Azure NAT gateway with [az network nat gateway create](/cli/azure/network/nat#az-network-nat-gateway-create). This command creates a gateway resource that uses the public IP address defined in the previous step. The idle timeout is set to 10 minutes.
++
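A minimal sketch of this step, again using the illustrative names from above:

```azurecli-interactive
# Create the NAT gateway, attach the public IP, and set the 10-minute idle timeout.
az network nat gateway create \
    --resource-group $resourceGroup \
    --name $natGateway \
    --public-ip-addresses $publicIpNat \
    --idle-timeout 10
```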
+### Create virtual network
+
+Create a virtual network with a subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). The IP address space for the virtual network is **10.1.0.0/16**. The subnet within the virtual network is **10.1.0.0/24**.
++
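A sketch of the virtual network creation using the address ranges stated above (resource names remain illustrative):

```azurecli-interactive
# Create the virtual network (10.1.0.0/16) with an initial subnet (10.1.0.0/24).
az network vnet create \
    --resource-group $resourceGroup \
    --name $vnet \
    --location $location \
    --address-prefixes 10.1.0.0/16 \
    --subnet-name $subnet \
    --subnet-prefixes 10.1.0.0/24
```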
+### Create bastion host subnet
+
+Create an Azure Bastion host to access the virtual machine.
+
+Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create an Azure Bastion subnet.
++
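A sketch of the subnet creation; the subnet must be named **AzureBastionSubnet** for Azure Bastion, and the **10.1.1.0/24** range is an assumption consistent with the portal quickstart later in this document:

```azurecli-interactive
# Create the dedicated subnet required by Azure Bastion.
az network vnet subnet create \
    --resource-group $resourceGroup \
    --vnet-name $vnet \
    --name AzureBastionSubnet \
    --address-prefixes 10.1.1.0/24
```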
+### Create public IP address for the bastion host
+
+Create a public IP address for the bastion host with [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create).
++
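A minimal sketch, again assuming a Standard SKU static public IP:

```azurecli-interactive
# Create the public IP address used by the bastion host.
az network public-ip create \
    --resource-group $resourceGroup \
    --name $publicIpBastion \
    --sku Standard \
    --allocation-method Static
```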
+### Create the bastion host
+
+Use [az network bastion create](/cli/azure/network/bastion#az-network-bastion-create) to create the bastion host.
++
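A sketch under the same naming assumptions:

```azurecli-interactive
# Deploy the bastion host into the AzureBastionSubnet of the virtual network.
az network bastion create \
    --resource-group $resourceGroup \
    --name $bastion \
    --vnet-name $vnet \
    --public-ip-address $publicIpBastion \
    --location $location
```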
+### Configure NAT service for source subnet
+
+Configure the source subnet in the virtual network to use a specific NAT gateway resource with [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update). This command activates the NAT service on the specified subnet.
++
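A minimal sketch of associating the NAT gateway with the source subnet:

```azurecli-interactive
# Associate the NAT gateway with the subnet; outbound flows from the subnet now use it.
az network vnet subnet update \
    --resource-group $resourceGroup \
    --vnet-name $vnet \
    --name $subnet \
    --nat-gateway $natGateway
```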
+All outbound traffic to Internet destinations now uses the NAT gateway. It's not necessary to configure a user-defined route (UDR).
+
+## Create virtual machine
+
+Create a virtual machine to test the NAT gateway and verify the public IP address of the outbound connection.
+
+Create the virtual machine with [az vm create](/cli/azure/vm#az-vm-create).
++
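A minimal sketch; the Windows Server 2019 image, the empty `--public-ip-address` value (so the VM is reachable only through Bastion), and the credentials are assumptions mirroring the portal quickstart below:

```azurecli-interactive
# Create a test VM with no public IP; replace <your-password> with a strong password.
az vm create \
    --resource-group $resourceGroup \
    --name $vm \
    --image Win2019Datacenter \
    --vnet-name $vnet \
    --subnet $subnet \
    --public-ip-address "" \
    --admin-username azureuser \
    --admin-password '<your-password>'
```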
+Wait for the virtual machine creation to complete before moving on to the next section.
+
+## Test NAT gateway
+
+In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+
+1. Find the public IP address for the NAT gateway on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP**.
+
+1. Make note of the public IP address:
+
+ :::image type="content" source="./media/tutorial-create-nat-gateway-portal/find-public-ip.png" alt-text="Discover public IP address of NAT gateway" border="true":::
+
+1. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupNAT** resource group.
+
+1. On the **Overview** page, select **Connect**, then **Bastion**.
+
+1. Select the blue **Use Bastion** button.
+
+1. Enter the username and password entered during VM creation.
+
+1. Open **Internet Explorer** on **myVM**.
+
+1. Enter **https://whatsmyip.com** in the address bar.
+
+1. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+
+ :::image type="content" source="./media/tutorial-create-nat-gateway-portal/my-ip.png" alt-text="Internet Explorer showing external outbound IP" border="true":::
+
+## Clean up resources
+
+If you're not going to continue to use these resources, delete
+the virtual network, virtual machine, and NAT gateway with the following CLI command:
+
+```azurecli-interactive
+ az group delete \
+ --name $resourceGroup
+```
+
+## Next steps
+
+For more information on Azure Virtual Network NAT, see:
+> [!div class="nextstepaction"]
+> [Virtual Network NAT overview](nat-overview.md)
virtual-network Quickstart Create Nat Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md
+
+ Title: 'Quickstart: Create a NAT gateway - Azure portal'
+
+description: This quickstart shows how to create a NAT gateway by using the Azure portal.
+++++ Last updated : 03/02/2021+++
+# Quickstart: Create a NAT gateway using the Azure portal
+
+This quickstart shows you how to use the Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Virtual network
+
+Before you deploy a VM and use your NAT gateway, you need to create the resource group and virtual network.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. On the upper-left side of the screen, select **Create a resource > Networking > Virtual network** or search for **Virtual network** in the search box.
+
+3. Select **Create**.
+
+4. In **Create virtual network**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **Create new**. </br> Enter **myResourceGroupNAT**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **(Europe) West Europe** |
+
+5. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+
+6. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+7. Select **+ Add subnet**.
+
+8. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **mySubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+9. Select **Add**.
+
+10. Select the **Security** tab.
+
+11. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+12. Select the **Review + create** tab or select the **Review + create** button.
+
+13. Select **Create**.
+
+## NAT gateway
+
+You can use one or more public IP address resources, public IP prefixes, or both. We'll add a public IP resource and a NAT gateway resource.
+
+1. On the upper-left side of the screen, select **Create a resource > Networking > NAT gateway** or search for **NAT gateway** in the search box.
+
+2. Select **Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select this information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ ||--|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **myResourceGroupNAT**. |
+ | **Instance details** | |
+ | Name | Enter **myNATgateway** |
+ | Region | Select **(Europe) West Europe** |
+ | Availability Zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **10**. |
+
+4. Select the **Outbound IP** tab, or select the **Next: Outbound IP** button at the bottom of the page.
+
+5. In the **Outbound IP** tab, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | Public IP addresses | Select **Create a new public IP address**. </br> In **Name**, enter **myPublicIP**. </br> Select **OK**. |
+
+6. Select the **Subnet** tab, or select the **Next: Subnet** button at the bottom of the page.
+
+7. In the **Subnet** tab, select **myVNet** in the **Virtual network** pull-down.
+
+8. Check the box next to **mySubnet**.
+
+9. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+10. Select **Create**.
+
+## Virtual machine
+
+In this section, you'll create a virtual machine to test the NAT gateway and verify the public IP address of the outbound connection.
+
+1. On the upper-left side of the portal, select **Create a resource** > **Compute** > **Virtual machine**.
+
+2. In the **Create a virtual machine** page in the **Basics** tab, enter, or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroupNAT**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM**. |
+ | Region | Select **(Europe) West Europe**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Image | Select **Windows Server 2019 Datacenter - Gen2**. |
+ | Size | Select **Standard_DS1_v2**. |
+ | **Administrator account** | |
+ | Username | Enter a username for the virtual machine. |
+ | Password | Enter a password. |
+ | Confirm password | Confirm password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+3. Select the **Disks** tab, or select the **Next: Disks** button at the bottom of the page.
+
+4. Leave the default in the **Disks** tab.
+
+5. Select the **Networking** tab, or select the **Next: Networking** button at the bottom of the page.
+
+6. In the **Networking** tab, enter or select the following information:
+
+ | **Setting** | **Value** |
+ | -- | |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet**. |
+ | Subnet | Select **mySubnet**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **None**. |
+
+7. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page.
+
+8. Select **Create**.
+
+## Test NAT gateway
+
+In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
+
+1. Find the public IP address for the NAT gateway on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP**.
+
+2. Make note of the public IP address:
+
+ :::image type="content" source="./media/tutorial-create-nat-gateway-portal/find-public-ip.png" alt-text="Discover public IP address of NAT gateway" border="true":::
+
+3. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupNAT** resource group.
+
+4. On the **Overview** page, select **Connect**, then **Bastion**.
+
+5. Select the blue **Use Bastion** button.
+
+6. Enter the username and password entered during VM creation.
+
+7. Open **Internet Explorer** on **myVM**.
+
+8. Enter **https://whatsmyip.com** in the address bar.
+
+9. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+
+ :::image type="content" source="./media/tutorial-create-nat-gateway-portal/my-ip.png" alt-text="Internet Explorer showing external outbound IP" border="true":::
+
+## Clean up resources
+
+If you're not going to continue to use these resources, delete
+the virtual network, virtual machine, and NAT gateway with the following steps:
+
+1. From the left-hand menu, select **Resource groups**.
+
+2. Select the **myResourceGroupNAT** resource group.
+
+3. Select **Delete resource group**.
+
+4. Enter **myResourceGroupNAT** and select **Delete**.
+
+## Next steps
+
+For more information on Azure Virtual Network NAT, see:
+> [!div class="nextstepaction"]
+> [Virtual Network NAT overview](nat-overview.md)
virtual-network Quickstart Create Nat Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-powershell.md
+
+ Title: 'Quickstart: Create a NAT gateway - PowerShell'
+
+description: Get started creating a NAT gateway using Azure PowerShell.
+++++ Last updated : 03/09/2021+++
+# Quickstart: Create a NAT gateway using Azure PowerShell
+
+This quickstart shows you how to use the Azure Virtual Network NAT service. You'll create a NAT gateway to provide outbound connectivity for a virtual machine in Azure.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell
+
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+
+## Create a resource group
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). An Azure resource group is a logical container into which Azure resources are deployed and managed.
+
+The following example creates a resource group named **myResourceGroupNAT** in the **eastus2** location:
+
+```azurepowershell-interactive
+$rsg = @{
+ Name = 'myResourceGroupNAT'
+ Location = 'eastus2'
+}
+New-AzResourceGroup @rsg
+```
+## Create the NAT gateway
+
+In this section, we create the NAT gateway and supporting resources.
+
+* To access the Internet, you need one or more public IP addresses for the NAT gateway. Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a public IP address resource named **myPublicIP** in **myResourceGroupNAT**.
+
+* Create an Azure NAT gateway with [New-AzNatGateway](/powershell/module/az.network/new-aznatgateway). This command creates a gateway resource named **myNATgateway** that uses the public IP address **myPublicIP**. The idle timeout is set to 10 minutes.
+
+* Create a virtual network named **myVnet** with a subnet named **mySubnet**, defined with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig), in **myResourceGroupNAT** using [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). The IP address space for the virtual network is **10.1.0.0/16**. The subnet within the virtual network is **10.1.0.0/24**.
+
+* Create an Azure Bastion host named **myBastionHost** to access the virtual machine. Use [New-AzBastion](/powershell/module/az.network/new-azbastion) to create the bastion host. Create a public IP address for the bastion host with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress).
+
+```azurepowershell-interactive
+## Create public IP address for NAT gateway ##
+$ip = @{
+ Name = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroupNAT'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+}
+$publicIP = New-AzPublicIpAddress @ip
+
+## Create NAT gateway resource ##
+$nat = @{
+ ResourceGroupName = 'myResourceGroupNAT'
+ Name = 'myNATgateway'
+ IdleTimeoutInMinutes = '10'
+ Sku = 'Standard'
+ Location = 'eastus2'
+ PublicIpAddress = $publicIP
+}
+$natGateway = New-AzNatGateway @nat
+
+## Create subnet config and associate NAT gateway to subnet##
+$subnet = @{
+ Name = 'mySubnet'
+ AddressPrefix = '10.1.0.0/24'
+ NatGateway = $natGateway
+}
+$subnetConfig = New-AzVirtualNetworkSubnetConfig @subnet
+
+## Create Azure Bastion subnet. ##
+$bastsubnet = @{
+ Name = 'AzureBastionSubnet'
+ AddressPrefix = '10.1.1.0/24'
+}
+$bastsubnetConfig = New-AzVirtualNetworkSubnetConfig @bastsubnet
+
+## Create the virtual network ##
+$net = @{
+ Name = 'myVNet'
+ ResourceGroupName = 'myResourceGroupNAT'
+ Location = 'eastus2'
+ AddressPrefix = '10.1.0.0/16'
+ Subnet = $subnetConfig,$bastsubnetConfig
+}
+$vnet = New-AzVirtualNetwork @net
+
+## Create public IP address for bastion host. ##
+$ip = @{
+ Name = 'myBastionIP'
+ ResourceGroupName = 'myResourceGroupNAT'
+ Location = 'eastus2'
+ Sku = 'Standard'
+ AllocationMethod = 'Static'
+}
+$publicip = New-AzPublicIpAddress @ip
+
+## Create bastion host ##
+$bastion = @{
+ ResourceGroupName = 'myResourceGroupNAT'
+ Name = 'myBastion'
+ PublicIpAddress = $publicip
+ VirtualNetwork = $vnet
+}
+New-AzBastion @bastion -AsJob
+
+```
+
+## Virtual machine
+
+In this section, you'll create a virtual machine to test the NAT gateway and verify the public IP address of the outbound connection.
+
+* Create a network interface with [New-AzNetworkInterface](/powershell/module/az.network/new-aznetworkinterface).
+
+* Set an administrator username and password for the VM with [Get-Credential](/powershell/module/microsoft.powershell.security/get-credential).
+
+* Create the virtual machine with:
+ * [New-AzVM](/powershell/module/az.compute/new-azvm)
+ * [New-AzVMConfig](/powershell/module/az.compute/new-azvmconfig)
+ * [Set-AzVMOperatingSystem](/powershell/module/az.compute/set-azvmoperatingsystem)
+ * [Set-AzVMSourceImage](/powershell/module/az.compute/set-azvmsourceimage)
+ * [Add-AzVMNetworkInterface](/powershell/module/az.compute/add-azvmnetworkinterface)
+
+```azurepowershell-interactive
+## Set the administrator username and password for the VM. ##
+$cred = Get-Credential
+
+## Place the virtual network into a variable. ##
+$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroupNAT'
+
+## Create network interface for virtual machine. ##
+$nic = @{
+ Name = "myNicVM"
+ ResourceGroupName = 'myResourceGroupNAT'
+ Location = 'eastus2'
+ Subnet = $vnet.Subnets[0]
+}
+$nicVM = New-AzNetworkInterface @nic
+
+## Create a virtual machine configuration for VMs ##
+$vmsz = @{
+ VMName = "myVM"
+ VMSize = 'Standard_DS1_v2'
+}
+$vmos = @{
+ ComputerName = "myVM"
+ Credential = $cred
+}
+$vmimage = @{
+ PublisherName = 'MicrosoftWindowsServer'
+ Offer = 'WindowsServer'
+ Skus = '2019-Datacenter'
+ Version = 'latest'
+}
+$vmConfig = New-AzVMConfig @vmsz `
+ | Set-AzVMOperatingSystem @vmos -Windows `
+ | Set-AzVMSourceImage @vmimage `
+ | Add-AzVMNetworkInterface -Id $nicVM.Id
+
+## Create the virtual machine for VMs ##
+$vm = @{
+ ResourceGroupName = 'myResourceGroupNAT'
+ Location = 'eastus2'
+ VM = $vmConfig
+}
+New-AzVM @vm
+
+```
+
+Wait for the virtual machine creation to complete before moving on to the next section.
+
+## Test NAT gateway
+
+In this section, we'll test the NAT gateway. We'll first discover the public IP of the NAT gateway. We'll then connect to the test virtual machine and verify the outbound connection through the NAT gateway.
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+
+1. Find the public IP address for the NAT gateway on the **Overview** screen. Select **All services** in the left-hand menu, select **All resources**, and then select **myPublicIP**.
+
+2. Make note of the public IP address:
+
+ :::image type="content" source="./media/tutorial-create-nat-gateway-portal/find-public-ip.png" alt-text="Discover public IP address of NAT gateway" border="true":::
+
+3. Select **All services** in the left-hand menu, select **All resources**, and then from the resources list, select **myVM** that is located in the **myResourceGroupNAT** resource group.
+
+4. On the **Overview** page, select **Connect**, then **Bastion**.
+
+5. Select the blue **Use Bastion** button.
+
+6. Enter the username and password entered during VM creation.
+
+7. Open **Internet Explorer** on **myVM**.
+
+8. Enter **https://whatsmyip.com** in the address bar.
+
+9. Verify the IP address displayed matches the NAT gateway address you noted in the previous step:
+
+ :::image type="content" source="./media/tutorial-create-nat-gateway-portal/my-ip.png" alt-text="Internet Explorer showing external outbound IP" border="true":::
+
+## Clean up resources
+
+If you're not going to continue to use these resources, delete
+the virtual network, virtual machine, and NAT gateway with the following steps:
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name 'myResourceGroupNAT' -Force
+```
+
+## Next steps
+
+For more information on Azure Virtual Network NAT, see:
+> [!div class="nextstepaction"]
+> [Virtual Network NAT overview](nat-overview.md)
virtual-network Region Move Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/region-move-nat-gateway.md
After you have moved all the resources associated with the original instance of
## Test NAT gateway in new region
-For steps on how to test the NAT gateway, see [Tutorial: Create a NAT gateway - Azure portal](tutorial-create-nat-gateway-portal.md#test-nat-gateway).
+For steps on how to test the NAT gateway, see [Quickstart: Create a NAT gateway - Azure portal](quickstart-create-nat-gateway-portal.md#test-nat-gateway).
## Delete old instance of NAT gateway
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Check the following configurations to ensure that NAT gateway can be used to dir
[Virtual Network NAT gateway](./nat-overview.md#virtual-network-nat-basics) supports IPv4 UDP and TCP protocols. ICMP is not supported and is expected to fail. To validate end-to-end connectivity of NAT gateway, follow these steps:
-1. Validate that your [NAT gateway public IP address is being used](./tutorial-create-nat-gateway-portal.md#test-nat-gateway).
+1. Validate that your [NAT gateway public IP address is being used](./quickstart-create-nat-gateway-portal.md#test-nat-gateway).
2. Conduct TCP connection tests and UDP-specific application layer tests. 3. Look at NSG flow logs to analyze outbound traffic flows from NAT gateway.
The order of precedence for internet routing configurations is as follows:
Virtual appliance UDR / VPN ExpressRoute >> NAT gateway >> default system Test and resolve issues with a virtual appliance UDR or VPN ExpressRoute overriding your NAT gateway by:
-1. [Testing that the NAT gateway public IP](./tutorial-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a custom UDR, follow the remaining steps on how to check for and remove custom UDRs.
+1. [Testing that the NAT gateway public IP](./quickstart-create-nat-gateway-portal.md#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a custom UDR; follow the remaining steps on how to check for and remove custom UDRs.
2. Check for UDRs in the virtual network's route table; refer to [view route tables](../manage-route-table.md#view-route-tables). 3. Remove the UDR from the route table by following [create, change, or delete an Azure route table](../manage-route-table.md#change-a-route-table).
virtual-network Network Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/network-overview.md
This table lists the methods that you can use to create a NAT gateway resource.
| Method | Description | | | -- |
-| [Azure portal](./nat-gateway/tutorial-create-nat-gateway-portal.md) | Creates a virtual network, subnet, public IP, NAT gateway, and a virtual machine to test the NAT gateway resource. |
-| [Azure PowerShell](./nat-gateway/tutorial-create-nat-gateway-powershell.md) | Includes the use of [New-AzNatGateway](/powershell/module/az.network/new-aznatgateway) to create a NAT gateway resource. Creates a virtual network, subnet, public IP, NAT gateway, and a virtual machine to test the NAT gateway resource. |
-| [Azure CLI](./nat-gateway/tutorial-create-nat-gateway-cli.md) | Includes the use of [az network nat gateway create](/cli/azure/network/nat#az-network-nat-gateway-create) to create a NAT gateway resource. Creates a virtual network, subnet, public IP, NAT gateway, and a virtual machine to test the NAT gateway resource. |
+| [Azure portal](./nat-gateway/quickstart-create-nat-gateway-portal.md) | Creates a virtual network, subnet, public IP, NAT gateway, and a virtual machine to test the NAT gateway resource. |
+| [Azure PowerShell](./nat-gateway/quickstart-create-nat-gateway-powershell.md) | Includes the use of [New-AzNatGateway](/powershell/module/az.network/new-aznatgateway) to create a NAT gateway resource. Creates a virtual network, subnet, public IP, NAT gateway, and a virtual machine to test the NAT gateway resource. |
+| [Azure CLI](./nat-gateway/quickstart-create-nat-gateway-cli.md) | Includes the use of [az network nat gateway create](/cli/azure/network/nat#az-network-nat-gateway-create) to create a NAT gateway resource. Creates a virtual network, subnet, public IP, NAT gateway, and a virtual machine to test the NAT gateway resource. |
| [Template](./nat-gateway/quickstart-create-nat-gateway-template.md) | Creates a virtual network, subnet, public IP, and NAT gateway resource. | ## Azure Bastion
There are also tutorials on how to load balance VMs and create highly available
- Learn how to configure [user-defined routes and IP forwarding](../virtual-network/virtual-networks-udr-overview.md). - Learn how to configure [VNet to VNet connections](../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md). - Learn how to [Troubleshoot routes](../virtual-network/diagnose-network-routing-problem.md).-- Learn more about [Virtual machine network bandwidth](../virtual-network/virtual-machine-network-throughput.md).
+- Learn more about [Virtual machine network bandwidth](../virtual-network/virtual-machine-network-throughput.md).
virtual-wan Monitor Virtual Wan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitor-virtual-wan.md
$MetricInformation.Data
**Metric Name** - Refers to the name of the metric you are querying, which in this case is called 'VirtualHubDataProcessed'. This metric shows all the data that the Virtual Hub Router has processed in the selected time period of the hub.
-**Time Grain** - Refers to the frequency at which you want see the aggregation of. In the current command, you will see a selected aggregated unit per 5 mins. You can select – 5M/15M/30M/1H/6H/12H and 1D.
+**Time Grain** - Refers to the frequency at which you want to see the aggregation. In the current command, you will see a selected aggregated unit per 5 mins. You can select – 5M/15M/30M/1H/6H/12H and 1D.
**Start Time and End Time** - This time is based on UTC, so please ensure that you are entering UTC values when inputting these parameters. If these parameters are not used, by default the past one hour's worth of data is shown.
The following metrics are available for Azure ExpressRoute gateways:
| Metric | Description| | | |
-| **BitsInPerSecond** | Bits ingressing Azure per second.|
-| **BitsOutPerSecond** | Bits egressing Azure per second. |
+| **BitsInPerSecond** | Bits per second ingressing Azure through the ExpressRoute Gateway. |
+| **BitsOutPerSecond** | Bits per second egressing Azure through the ExpressRoute Gateway. |
| **CPU Utilization** | CPU Utilization of the ExpressRoute Gateway.|
-| **Packets per second** | Packet count of ExpressRoute Gateway.|
+| **Packets per second** | Total packets received on the ExpressRoute Gateway per second.|
| **Count of routes advertised to peer**| Count of Routes Advertised to Peer by ExpressRoute Gateway. | | **Count of routes learned from peer**| Count of Routes Learned from Peer by ExpressRoute Gateway.| | **Frequency of routes changed** | Frequency of Route changes in ExpressRoute Gateway.|
vpn-gateway Point To Site Vpn Client Configuration Radius Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-configuration-radius-password.md
+
+ Title: 'Configure a VPN client for P2S RADIUS: password-auth'
+
+description: Learn how to configure a VPN client for point-to-site VPN configurations that use RADIUS username/password authentication.
++++ Last updated : 05/09/2022+
+# Configure a VPN client for point-to-site: RADIUS - password authentication
+
+To connect to a virtual network over point-to-site (P2S), you need to configure the client device that you'll connect from. You can create P2S VPN connections from Windows, macOS, and Linux client devices. This article helps you create and install the VPN client configuration for username/password RADIUS authentication.
+
+When you're using RADIUS authentication, there are multiple authentication options: username/password authentication, certificate authentication, and other authentication types. To configure a VPN client, you use client configuration files that contain the required settings. The VPN client configuration is different for each type of authentication.
+
+>[!NOTE]
+> [!INCLUDE [TLS](../../includes/vpn-gateway-tls-change.md)]
+>
+
+## Workflow
+
+The configuration workflow for P2S RADIUS authentication is as follows:
+
+1. [Set up the Azure VPN gateway for P2S connectivity](point-to-site-how-to-radius-ps.md).
+1. [Set up your RADIUS server for authentication](point-to-site-how-to-radius-ps.md#radius).
+1. **Obtain the VPN client configuration for the authentication option of your choice and use it to set up the VPN client** (this article).
+1. [Complete your P2S configuration and connect](point-to-site-how-to-radius-ps.md).
+
+>[!IMPORTANT]
+>If there are any changes to the point-to-site VPN configuration after you generate the VPN client configuration profile, such as the VPN protocol type or authentication type, you must generate and install a new VPN client configuration on your users' devices.
+>
+
+You can configure username/password authentication to either use Active Directory or not use Active Directory. In either scenario, make sure that all connecting users have username/password credentials that can be authenticated through RADIUS.
+
+When you configure username/password authentication, you can only create a configuration for the EAP-MSCHAPv2 username/password authentication protocol. In the commands, `-AuthenticationMethod` is `EapMSChapv2`.
+
+## Generate VPN client configuration files
+
+You can generate the VPN client configuration files by using the Azure portal, or by using Azure PowerShell.
+
+### Azure portal
+
+1. Navigate to the virtual network gateway.
+1. Click **Point-to-Site configuration**.
+1. Click **Download VPN client**.
+1. Select the client and fill out any information that is requested.
+1. Click **Download** to generate the .zip file.
+1. The .zip file will download, typically to your Downloads folder.
+
+### Azure PowerShell
+
+Generate VPN client configuration files for use with username/password authentication. You can generate the VPN client configuration files by using the following command:
+
+```azurepowershell-interactive
+New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW" -AuthenticationMethod "EapMSChapv2"
+```
+
+Running the command returns a link. Copy and paste the link into a web browser to download **VpnClientConfiguration.zip**. Unzip the file to view the following folders:
+
+* **WindowsAmd64** and **WindowsX86**: These folders contain the Windows 64-bit and 32-bit installer packages, respectively.
+* **Generic**: This folder contains general information that you use to create your own VPN client configuration. You don't need this folder for username/password authentication configurations.
+* **Mac**: If you configured IKEv2 when you created the virtual network gateway, you see a folder named **Mac** that contains a **mobileconfig** file. You use this file to configure Mac clients.
+
+If you already created client configuration files, you can retrieve them by using the `Get-AzVpnClientConfiguration` cmdlet. But if you make any changes to your P2S VPN configuration, such as the VPN protocol type or authentication type, the configuration isn’t updated automatically. You must run the `New-AzVpnClientConfiguration` cmdlet to create a new configuration download.
+
+To retrieve previously generated client configuration files, use the following command:
+
+```azurepowershell-interactive
+Get-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW"
+```
+
+## Windows VPN client
+
+You can use the same VPN client configuration package on each Windows client computer, as long as the version matches the architecture for the client. For the list of client operating systems that are supported, see the [FAQ](vpn-gateway-vpn-faq.md#P2S).
+
+Use the following steps to configure the native Windows VPN client for username/password authentication:
+
+1. Select the VPN client configuration files that correspond to the architecture of the Windows computer. For a 64-bit processor architecture, choose the **VpnClientSetupAmd64** installer package. For a 32-bit processor architecture, choose the **VpnClientSetupX86** installer package.
+
+1. To install the package, double-click it. If you see a SmartScreen pop-up, select **More info** > **Run anyway**.
+
+1. On the client computer, browse to **Network Settings** and select **VPN**. The VPN connection shows the name of the virtual network that it connects to.
+
+## Mac (macOS) VPN client
+
+1. Select the **VpnClientSetup mobileconfig** file and send it to each of the users. You can use email or another method.
+
+1. Locate the **mobileconfig** file on the Mac.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/mobile-config-file.png" alt-text="Screenshot shows location of the mobile config file." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/mobile-config-file.png":::
+
+1. Optional Step - If you want to specify a custom DNS, add the following lines to the **mobileconfig** file:
+
+ ```xml
+ <key>DNS</key>
+ <dict>
+ <key>ServerAddresses</key>
+ <array>
+ <string>10.0.0.132</string>
+ </array>
+ <key>SupplementalMatchDomains</key>
+ <array>
+ <string>TestDomain.com</string>
+ </array>
+ </dict>
+ ```
+
+1. Double-click the profile to install it, and select **Continue**. The profile name is the same as the name of your virtual network.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/install.png" alt-text="Screenshot shows profile install with continue selected." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/install.png":::
+
+1. Select **Continue** to trust the sender of the profile and proceed with the installation.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/continue.png" alt-text="Screenshot shows continue message." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/continue.png":::
+
+1. During profile installation, you can specify the username and password for VPN authentication. It's not mandatory to enter this information. If you do, the information is saved and automatically used when you initiate a connection. Select **Install** to proceed.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/vpn-settings.png" alt-text="Screenshot shows enter settings for username and password." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/vpn-settings.png":::
+
+1. Enter a username and password for the privileges that are required to install the profile on your computer. Select **OK**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/user-name.png" alt-text="Screenshot shows enter settings for username and password privileges." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/user-name.png":::
+
+1. After the profile is installed, it's visible in the **Profiles** dialog box. You can also open this dialog box later from **System Preferences**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/system-preferences.png" alt-text="Screenshot shows profiles dialog box." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/system-preferences.png":::
+1. To access the VPN connection, open the **Network** dialog box from **System Preferences**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/network.png" alt-text="Screenshot shows network dialog box." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/network.png":::
+
+1. The VPN connection appears as **IkeV2-VPN**. You can change the name by updating the **mobileconfig** file.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/connection-name.png" alt-text="Screenshot shows connection name." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/connection-name.png":::
+
+1. Select **Authentication Settings**. Select **Username** in the list and enter your credentials. If you entered the credentials earlier, then **Username** is automatically chosen in the list and the username and password are pre-populated. Select **OK** to save the settings.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/mac/authentication.png" alt-text="Screenshot that shows the Authentication settings drop-down with Username selected." lightbox="./media/point-to-site-vpn-client-config-radius-password/mac/authentication.png":::
+
+1. Back in the **Network** dialog box, select **Apply** to save the changes. To initiate the connection, select **Connect**.
+
+## Linux VPN client - strongSwan
+
+The following instructions were created through strongSwan 5.5.1 on Ubuntu 17.04. Actual screens might be different, depending on your version of Linux and strongSwan.
+
+1. Open the **Terminal** to install **strongSwan** and its Network Manager by running the command in the example. If you receive an error that's related to `libcharon-extra-plugins`, replace it with `strongswan-plugin-eap-mschapv2`.
+
+ ```Terminal
+ sudo apt-get install strongswan libcharon-extra-plugins moreutils iptables-persistent network-manager-strongswan
+ ```
+
+1. Select the **Network Manager** icon (up-arrow/down-arrow), and select **Edit Connections**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/linux/edit-connection.png" alt-text="Edit connections in Network Manager." lightbox="./media/point-to-site-vpn-client-config-radius-password/linux/edit-connection.png":::
+
+1. Select the **Add** button to create a new connection.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/linux/add-connection.png" alt-text="Screenshot shows add connection for network connections." lightbox="./media/point-to-site-vpn-client-config-radius-password/linux/add-connection.png":::
+
+1. Select **IPsec/IKEv2 (strongswan)** from the drop-down menu, and then select **Create**. You can rename your connection in this step.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/linux/add-ikev2.png" alt-text="Screenshot shows select connection type." lightbox="./media/point-to-site-vpn-client-config-radius-password/linux/add-ikev2.png":::
+
+1. Open the **VpnSettings.xml** file from the **Generic** folder of the downloaded client configuration files. Find the tag called `VpnServer` and copy the name, beginning with `azuregateway` and ending with `.cloudapp.net`.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/linux/settings-file.png" alt-text="Screenshot shows contents of the VpnSettings.xml file." lightbox="./media/point-to-site-vpn-client-config-radius-password/linux/settings-file.png":::
+
+1. Paste this name into the **Address** field of your new VPN connection in the **Gateway** section. Next, select the folder icon at the end of the **Certificate** field, browse to the **Generic** folder, and select the **VpnServerRoot** file.
+
+1. In the **Client** section of the connection, select **EAP** for **Authentication**, and enter your username and password. You might have to select the lock icon on the right to save this information. Then, select **Save**.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/linux/edit-settings.png" alt-text="Screenshot shows edit connection settings." lightbox="./media/point-to-site-vpn-client-config-radius-password/linux/edit-settings.png":::
+
+1. Select the **Network Manager** icon (up-arrow/down-arrow) and hover over **VPN Connections**. You see the VPN connection that you created. To initiate the connection, select it.
+
+ :::image type="content" source="./media/point-to-site-vpn-client-config-radius-password/linux/connect.png" alt-text="Screenshot shows connect." lightbox="./media/point-to-site-vpn-client-config-radius-password/linux/connect.png":::
+
+## Next steps
+
+Return to the article to [complete your P2S configuration](point-to-site-how-to-radius-ps.md).
+
+For P2S troubleshooting information, see [Troubleshooting Azure point-to-site connections](vpn-gateway-troubleshoot-vpn-point-to-site-connection-problems.md).