Updates from: 08/26/2021 03:04:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Disable Email Verification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/disable-email-verification.md
Previously updated : 12/10/2020 Last updated : 08/25/2021
Follow these steps to disable email verification:
1. Use the **Directory + subscription** filter in the top menu to select the directory that contains your Azure AD B2C tenant.
1. In the left menu, select **Azure AD B2C**. Or, select **All services** and search for and select **Azure AD B2C**.
1. Select **User flows**.
-1. Select the user flow for which you want to disable email verification. For example, *B2C_1_signinsignup*.
+1. Select the user flow for which you want to disable email verification.
1. Select **Page layouts**.
1. Select **Local account sign-up page**.
1. Under **User attributes**, select **Email Address**.
-1. In the **REQUIRES VERIFICATION** drop-down, select **No**.
+1. In the **Requires Verification** drop-down, select **No**.
1. Select **Save**.

Email verification is now disabled for this user flow.

::: zone-end
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/embedded-login.md
zone_pivot_groups: b2c-policy-type
-# Embedded sing-up or sign-in experience
+# Embedded sign-up or sign-in experience
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/identity-provider-generic-saml-options.md
The following example demonstrates an authorization request with **AllowCreate**
You can force the external SAML IDP to prompt the user for authentication by passing the `ForceAuthN` property in the SAML authentication request. Your identity provider must also support this property.
-The `ForceAuthN` property is a Boolean `true` or `false` value. By default, Azure AD B2C sets the ForceAuthN value to `false`. You can change this behavior by setting ForceAuthN to `true` so that when there is a valid session, the initiating request forces authentication (for example, by sending `prompt=login` in the OpenID Connect request).
+The `ForceAuthN` property is a Boolean `true` or `false` value. By default, Azure AD B2C sets the ForceAuthN value to `false`. If the session is then reset (for example, by using `prompt=login` in OIDC), the ForceAuthN value is set to `true`. Setting the metadata item as shown below forces the value for all requests to the external IDP.
The following example shows the `ForceAuthN` property set to `true`:
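The example itself is cut off in this excerpt; in a custom policy it presumably takes the shape of a metadata item on the SAML identity provider's technical profile, along the lines of this sketch:

```xml
<!-- Sketch only: ForceAuthN as a technical-profile metadata item (placement assumed, not shown in this excerpt) -->
<Metadata>
  <Item Key="ForceAuthN">true</Item>
</Metadata>
```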
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/page-layout.md
Previously updated : 08/03/2021 Last updated : 08/25/2021
Azure AD B2C page layout uses the following versions of the [jQuery library](htt
- Added support for multiple sign-up links.
- Added support for user input validation according to the predicate rules defined in the policy.
+- When the [sign-in option](sign-in-options.md) is set to Email, the sign-in header presents "Sign in with your sign in name". The username field presents "Sign in name". For more information, see [localization](localization-string-ids.md#sign-up-or-sign-in-page-elements).
**1.2.0**
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
The **OutputClaimsTransformations** element may contain a collection of **Output
| IncludeKeyInfo | No | Indicates whether the SAML authentication request contains the public key of the certificate when the binding is set to `HTTP-POST`. Possible values: `true` or `false`. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
| SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](session-behavior.md#sign-out). Possible values: `true` (default), or `false`. |
-|ForceAuthN| No| Passes the ForceAuthN value in the SAML authentication request to determine if the external SAML IDP will be forced to prompt the user for authentication. By default, Azure AD B2C sets the ForceAuthN value to `false`. You can change this behavior by setting ForceAuthN to `true` so that when there is a valid session, the initiating request forces authentication (for example, by sending `prompt=login` in the OpenID Connect request). Possible values: `true` or `false`.|
+|ForceAuthN| No| Passes the ForceAuthN value in the SAML authentication request to determine if the external SAML IDP will be forced to prompt the user for authentication. By default, Azure AD B2C sets the ForceAuthN value to `false` on initial login. If the session is then reset (for example, by using `prompt=login` in OIDC), the ForceAuthN value is set to `true`. Setting the metadata item as shown below forces the value for all requests to the external IDP. Possible values: `true` or `false`.|
## Cryptographic keys
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Previously updated : 05/11/2021 Last updated : 08/25/2021
In the table below, any item marked as fixed means that the proper behavior can
## Flags to alter the SCIM behavior

Use the flags below in the tenant URL of your application in order to change the default SCIM client behavior. Use the following URL to update PATCH behavior and ensure SCIM compliance. The flag will alter the following behaviors:
- Requests made to disable users
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Previously updated : 07/29/2021 Last updated : 08/25/2021
The syntax for Expressions for Attribute Mappings is reminiscent of Visual Basic
## List of Functions
-[Append](#append) &nbsp;&nbsp;&nbsp;&nbsp; [AppRoleAssignmentsComplex](#approleassignmentscomplex) &nbsp;&nbsp;&nbsp;&nbsp; [BitAnd](#bitand) &nbsp;&nbsp;&nbsp;&nbsp; [CBool](#cbool) &nbsp;&nbsp;&nbsp;&nbsp; [CDate](#cdate) &nbsp;&nbsp;&nbsp;&nbsp; [Coalesce](#coalesce) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToBase64](#converttobase64) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToUTF8Hex](#converttoutf8hex) &nbsp;&nbsp;&nbsp;&nbsp; [Count](#count) &nbsp;&nbsp;&nbsp;&nbsp; [CStr](#cstr) &nbsp;&nbsp;&nbsp;&nbsp; [DateAdd](#dateadd) &nbsp;&nbsp;&nbsp;&nbsp; [DateFromNum](#datefromnum) &nbsp;[FormatDateTime](#formatdatetime) &nbsp;&nbsp;&nbsp;&nbsp; [Guid](#guid) &nbsp;&nbsp;&nbsp;&nbsp; [IgnoreFlowIfNullOrEmpty](#ignoreflowifnullorempty) &nbsp;&nbsp;&nbsp;&nbsp;[IIF](#iif) &nbsp;&nbsp;&nbsp;&nbsp;[InStr](#instr) &nbsp;&nbsp;&nbsp;&nbsp; [IsNull](#isnull) &nbsp;&nbsp;&nbsp;&nbsp; [IsNullOrEmpty](#isnullorempty) &nbsp;&nbsp;&nbsp;&nbsp; [IsPresent](#ispresent) &nbsp;&nbsp;&nbsp;&nbsp; [IsString](#isstring) &nbsp;&nbsp;&nbsp;&nbsp; [Item](#item) &nbsp;&nbsp;&nbsp;&nbsp; [Join](#join) &nbsp;&nbsp;&nbsp;&nbsp; [Left](#left) &nbsp;&nbsp;&nbsp;&nbsp; [Mid](#mid) &nbsp;&nbsp;&nbsp;&nbsp; [NormalizeDiacritics](#normalizediacritics) &nbsp;&nbsp; &nbsp;&nbsp; [Not](#not) &nbsp;&nbsp;&nbsp;&nbsp; [Now](#now) &nbsp;&nbsp;&nbsp;&nbsp; [NumFromDate](#numfromdate) &nbsp;&nbsp;&nbsp;&nbsp; [RemoveDuplicates](#removeduplicates) &nbsp;&nbsp;&nbsp;&nbsp; [Replace](#replace) &nbsp;&nbsp;&nbsp;&nbsp; [SelectUniqueValue](#selectuniquevalue)&nbsp;&nbsp;&nbsp;&nbsp; [SingleAppRoleAssignment](#singleapproleassignment)&nbsp;&nbsp;&nbsp;&nbsp; [Split](#split)&nbsp;&nbsp;&nbsp;&nbsp;[StripSpaces](#stripspaces) &nbsp;&nbsp;&nbsp;&nbsp; [Switch](#switch)&nbsp;&nbsp;&nbsp;&nbsp; [ToLower](#tolower)&nbsp;&nbsp;&nbsp;&nbsp; [ToUpper](#toupper)&nbsp;&nbsp;&nbsp;&nbsp; [Word](#word)
+[Append](#append) &nbsp;&nbsp;&nbsp;&nbsp; [AppRoleAssignmentsComplex](#approleassignmentscomplex) &nbsp;&nbsp;&nbsp;&nbsp; [BitAnd](#bitand) &nbsp;&nbsp;&nbsp;&nbsp; [CBool](#cbool) &nbsp;&nbsp;&nbsp;&nbsp; [CDate](#cdate) &nbsp;&nbsp;&nbsp;&nbsp; [Coalesce](#coalesce) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToBase64](#converttobase64) &nbsp;&nbsp;&nbsp;&nbsp; [ConvertToUTF8Hex](#converttoutf8hex) &nbsp;&nbsp;&nbsp;&nbsp; [Count](#count) &nbsp;&nbsp;&nbsp;&nbsp; [CStr](#cstr) &nbsp;&nbsp;&nbsp;&nbsp; [DateAdd](#dateadd) &nbsp;&nbsp;&nbsp;&nbsp; [DateDiff](#datediff) &nbsp;&nbsp;&nbsp;&nbsp; [DateFromNum](#datefromnum) &nbsp;[FormatDateTime](#formatdatetime) &nbsp;&nbsp;&nbsp;&nbsp; [Guid](#guid) &nbsp;&nbsp;&nbsp;&nbsp; [IgnoreFlowIfNullOrEmpty](#ignoreflowifnullorempty) &nbsp;&nbsp;&nbsp;&nbsp;[IIF](#iif) &nbsp;&nbsp;&nbsp;&nbsp;[InStr](#instr) &nbsp;&nbsp;&nbsp;&nbsp; [IsNull](#isnull) &nbsp;&nbsp;&nbsp;&nbsp; [IsNullOrEmpty](#isnullorempty) &nbsp;&nbsp;&nbsp;&nbsp; [IsPresent](#ispresent) &nbsp;&nbsp;&nbsp;&nbsp; [IsString](#isstring) &nbsp;&nbsp;&nbsp;&nbsp; [Item](#item) &nbsp;&nbsp;&nbsp;&nbsp; [Join](#join) &nbsp;&nbsp;&nbsp;&nbsp; [Left](#left) &nbsp;&nbsp;&nbsp;&nbsp; [Mid](#mid) &nbsp;&nbsp;&nbsp;&nbsp; [NormalizeDiacritics](#normalizediacritics) &nbsp;&nbsp; &nbsp;&nbsp; [Not](#not) &nbsp;&nbsp;&nbsp;&nbsp; [Now](#now) &nbsp;&nbsp;&nbsp;&nbsp; [NumFromDate](#numfromdate) &nbsp;&nbsp;&nbsp;&nbsp; [RemoveDuplicates](#removeduplicates) &nbsp;&nbsp;&nbsp;&nbsp; [Replace](#replace) &nbsp;&nbsp;&nbsp;&nbsp; [SelectUniqueValue](#selectuniquevalue)&nbsp;&nbsp;&nbsp;&nbsp; [SingleAppRoleAssignment](#singleapproleassignment)&nbsp;&nbsp;&nbsp;&nbsp; [Split](#split)&nbsp;&nbsp;&nbsp;&nbsp;[StripSpaces](#stripspaces) &nbsp;&nbsp;&nbsp;&nbsp; [Switch](#switch)&nbsp;&nbsp;&nbsp;&nbsp; [ToLower](#tolower)&nbsp;&nbsp;&nbsp;&nbsp; [ToUpper](#toupper)&nbsp;&nbsp;&nbsp;&nbsp; [Word](#word)
### Append
Returns a date/time string representing a date to which a specified time interva
The **interval** string must have one of the following values:
 * yyyy Year
- * q Quarter
* m Month
- * y Day of year
* d Day
- * w Weekday
 * ww Week
 * h Hour
 * n Minute
The **interval** string must have one of the following values:
`DateAdd("yyyy", 2, CDate([StatusHireDate]))`
* **INPUT** (StatusHireDate): 2012-03-16-07:00
* **OUTPUT**: 3/16/2014 7:00:00 AM
+### DateDiff
+**Function:**
+`DateDiff(interval, date1, date2)`
+
+**Description:**
+This function uses the *interval* parameter to return a number that indicates the difference between the two input dates. It returns
+ * a positive number if date2 > date1,
+ * a negative number if date2 < date1,
+ * 0 if date2 == date1
+
+**Parameters:**
+
+| Name | Required/Optional | Type | Notes |
+| --- | --- | --- | --- |
+| **interval** |Required | String | Interval of time to use for calculating the difference. |
+| **date1** |Required | DateTime | DateTime representing a valid date. |
+| **date2** |Required | DateTime | DateTime representing a valid date. |
+
+The **interval** string must have one of the following values:
+ * yyyy Year
+ * m Month
+ * d Day
+ * ww Week
+ * h Hour
+ * n Minute
+ * s Second
+
+**Example 1: Compare current date with hire date from Workday with different intervals** <br>
+`DateDiff("d", Now(), CDate([StatusHireDate]))`
+
+| Example | interval | date1 | date2 | output |
+| --- | --- | --- | --- | --- |
+| Positive difference in days between two dates | d | 2021-08-18+08:00 | 2021-08-31+08:00 | 13 |
+| Negative difference in days between two dates | d | 8/25/2021 5:41:18 PM | 2012-03-16-07:00 | -3449 |
+| Difference in weeks between two dates | ww | 8/25/2021 5:41:18 PM | 2012-03-16-07:00 | -493 |
+| Difference in months between two dates | m | 8/25/2021 5:41:18 PM | 2012-03-16-07:00 | -113 |
+| Difference in years between two dates | yyyy | 8/25/2021 5:41:18 PM | 2012-03-16-07:00 | -9 |
+| Difference when both dates are same | d | 2021-08-31+08:00 | 2021-08-31+08:00 | 0 |
+| Difference in hours between two dates | h | 2021-08-24 | 2021-08-25 | 24 |
+| Difference in minutes between two dates | n | 2021-08-24 | 2021-08-25 | 1440 |
+| Difference in seconds between two dates | s | 2021-08-24 | 2021-08-25 | 86400 |
+
+**Example 2: Combine DateDiff with IIF function to set attribute value** <br>
+If an account is Active in Workday, set the *accountEnabled* attribute of the user to True only if hire date is within the next 5 days.
+
+```
+Switch([Active], ,
+ "1", IIF(DateDiff("d", Now(), CDate([StatusHireDate])) > 5, "False", "True"),
+ "0", "False")
+```
The NumFromDate function converts a DateTime value to Active Directory format th
**Example:**
* Workday example: Assuming you want to map the attribute *ContractEndDate* from Workday, which is in the format *2020-12-31-08:00*, to the *accountExpires* field in AD, here is how you can use this function and change the timezone offset to match your locale.
- `NumFromDate(Join("", FormatDateTime([ContractEndDate], ,"yyyy-MM-ddzzz", "yyyy-MM-dd"), "T23:59:59-08:00"))`
+ `NumFromDate(Join("", FormatDateTime([ContractEndDate], ,"yyyy-MM-ddzzz", "yyyy-MM-dd"), " 23:59:59-08:00"))`
* SuccessFactors example: Assuming you want to map the attribute *endDate* from SuccessFactors, which is in the format *M/d/yyyy hh:mm:ss tt*, to the *accountExpires* field in AD, here is how you can use this function and change the time zone offset to match your locale.
- `NumFromDate(Join("",FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyy-MM-dd"),"T23:59:59-08:00"))`
+ `NumFromDate(Join("",FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyy-MM-dd")," 23:59:59-08:00"))`
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/concept-mfa-licensing.md
Previously updated : 07/22/2021 Last updated : 08/25/2021
The following table details the different ways to get Azure AD Multi-Factor Auth
| [Microsoft 365 Business Premium](https://www.microsoft.com/microsoft-365/business) and [EMS](https://www.microsoft.com/security/business/enterprise-mobility-security) or [Microsoft 365 E3 and E5](https://www.microsoft.com/microsoft-365/enterprise/compare-office-365-plans) | EMS E3, Microsoft 365 E3, and Microsoft 365 Business Premium include Azure AD Premium P1. EMS E5 or Microsoft 365 E5 includes Azure AD Premium P2. You can use the same Conditional Access features noted in the following sections to provide multi-factor authentication to users. |
| [Azure AD Premium P1](../fundamentals/active-directory-get-started-premium.md) | You can use [Azure AD Conditional Access](../conditional-access/howto-conditional-access-policy-all-users-mfa.md) to prompt users for multi-factor authentication during certain scenarios or events to fit your business requirements. |
| [Azure AD Premium P2](../fundamentals/active-directory-get-started-premium.md) | Provides the strongest security position and improved user experience. Adds [risk-based Conditional Access](../conditional-access/howto-conditional-access-policy-risk.md) to the Azure AD Premium P1 features that adapts to user's patterns and minimizes multi-factor authentication prompts. |
-| [All Microsoft 365 plans](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans) | Azure AD Multi-Factor Authentication can be enabled all users using [security defaults](../fundamentals/concept-fundamentals-security-defaults.md). Management of Azure AD Multi-Factor Authentication is through the Microsoft 365 portal. For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see [secure Microsoft 365 resources with multi-factor authentication](/microsoft-365/admin/security-and-compliance/set-up-multi-factor-authentication). MFA can also be [enabled on a per-user basis](howto-mfa-userstates.md). |
+| [All Microsoft 365 plans](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans) | Azure AD Multi-Factor Authentication can be enabled for all users using [security defaults](../fundamentals/concept-fundamentals-security-defaults.md). Management of Azure AD Multi-Factor Authentication is through the Microsoft 365 portal. For an improved user experience, upgrade to Azure AD Premium P1 or P2 and use Conditional Access. For more information, see [secure Microsoft 365 resources with multi-factor authentication](/microsoft-365/admin/security-and-compliance/set-up-multi-factor-authentication). |
| [Office 365 free](https://www.microsoft.com/microsoft-365/enterprise/compare-office-365-plans)<br>[Azure AD free](../verifiable-credentials/how-to-create-a-free-developer-account.md) | You can use [security defaults](../fundamentals/concept-fundamentals-security-defaults.md) to prompt users for multi-factor authentication as needed. You don't have granular control of enabled users or scenarios, but it does provide that additional security step.<br /> Even when security defaults aren't used to enable multi-factor authentication for everyone, users assigned the *Azure AD Global Administrator* role can be configured to use multi-factor authentication. This feature of the free tier makes sure the critical administrator accounts are protected by multi-factor authentication. |

## Feature comparison of versions
If you don't want to enable Azure AD Multi-Factor Authentication for all users,
* For more information on costs, see [Azure AD pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
* [What is Conditional Access](../conditional-access/overview.md)
+* MFA can also be [enabled on a per-user basis](howto-mfa-userstates.md).
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-mfa-mfasettings.md
Users may have a combination of up to five OATH hardware tokens or authenticator
If users receive phone calls for MFA prompts, you can configure their experience, such as caller ID or voice greeting they hear.
-In the United States, if you haven't configured MFA Caller ID, voice calls from Microsoft come from the following numbers. If using spam filters, make sure to exclude these numbers:
+In the United States, if you haven't configured MFA Caller ID, voice calls from Microsoft come from the following number. If using spam filters, make sure to exclude this number:
-* *+1 (866) 539 4191*
* *+1 (855) 330 8653*
-* *+1 (877) 668 6536*
> [!NOTE]
> When Azure AD Multi-Factor Authentication calls are placed through the public telephone network, sometimes the calls are routed through a carrier that doesn't support caller ID. Because of this, caller ID isn't guaranteed, even though Azure AD Multi-Factor Authentication always sends it. This applies both to phone calls and to text messages provided by Azure AD Multi-Factor Authentication. If you need to validate that a text message is from Azure AD Multi-Factor Authentication, see [What SMS short codes are used for sending messages?](multi-factor-authentication-faq.yml#what-sms-short-codes-are-used-for-sending-sms-messages-to-my-users-)
To enable or disable verification methods, complete the following steps:
## Remember Multi-Factor Authentication
-The _remember Multi-Factor Authentication_ feature lets users can bypass subsequent verifications for a specified number of days, after they've successfully signed-in to a device by using Multi-Factor Authentication. To enhance usability and minimize the number of times a user has to perform MFA on the same device, select a duration of 90 days or more.
+The _remember Multi-Factor Authentication_ feature lets users bypass subsequent verifications for a specified number of days, after they've successfully signed-in to a device by using Multi-Factor Authentication. To enhance usability and minimize the number of times a user has to perform MFA on the same device, select a duration of 90 days or more.
> [!IMPORTANT]
> If an account or device is compromised, remembering Multi-Factor Authentication for trusted devices can affect security. If a corporate account becomes compromised or a trusted device is lost or stolen, you should [Revoke MFA Sessions](howto-mfa-userdevicesettings.md).
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Previously updated : 08/11/2021 Last updated : 08/25/2021
If you need more granularity, see the [list of Microsoft Azure Datacenter IP Ran
To determine if access to a URL and port is restricted in an environment, run the following cmdlet:

```powershell
-Test-NetConnection -ComputerName https://ssprdedicatedsbprodncu.servicebus.windows.net -Port 443
+Test-NetConnection -ComputerName ssprdedicatedsbprodscu.servicebus.windows.net -Port 443
```

Or run the following:

```powershell
-Invoke-WebRequest -Uri https://ssprdedicatedbprodscu.windows.net -Verbose
+Invoke-WebRequest -Uri https://ssprdedicatedsbprodscu.servicebus.windows.net -Verbose
```

For more information, see the [connectivity prerequisites for Azure AD Connect](../hybrid/how-to-connect-install-prerequisites.md).
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Administrators can assign a Conditional Access policy to the following cloud app
- Azure Event Hubs
- Azure Service Bus
- [Azure SQL Database and Azure Synapse Analytics](../../azure-sql/database/conditional-access-configure.md)
-- Dynamics CRM Online
+- Common Data Service
- Microsoft Application Insights Analytics
- [Microsoft Azure Information Protection](/azure/information-protection/faqs#i-see-azure-information-protection-is-listed-as-an-available-cloud-app-for-conditional-accesshow-does-this-work)
- [Microsoft Azure Management](#microsoft-azure-management)
Administrators can assign a Conditional Access policy to the following cloud app
- Microsoft Cloud App Security
- Microsoft Commerce Tools Access Control Portal
- Microsoft Commerce Tools Authentication Service
-- Microsoft Flow
- Microsoft Forms
- Microsoft Intune
- [Microsoft Intune Enrollment](/intune/enrollment/multi-factor-authentication)
- Microsoft Planner
-- Microsoft PowerApps
+- Microsoft Power Apps
+- Microsoft Power Automate
- Microsoft Search in Bing
- Microsoft StaffHub
- Microsoft Stream
Administrators can exclude specific apps from policy if they wish, including the
The following key applications are included in the Office 365 client app:
- - Microsoft Flow
 - Microsoft Forms
 - Microsoft Stream
 - Microsoft To-Do
The following key applications are included in the Office 365 client app:
 - Office Online
 - Office.com
 - OneDrive
- - PowerApps
+ - Power Automate
+ - Power Apps
 - Skype for Business Online
 - Sway
active-directory Concept Conditional Access Grant https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-grant.md
The following client apps have been confirmed to support this setting:
- Microsoft OneNote
- Microsoft Outlook
- Microsoft Planner
-- Microsoft PowerApps
+- Microsoft Power Apps
- Microsoft Power BI
- Microsoft PowerPoint
- Microsoft SharePoint
The following client apps have been confirmed to support this setting:
- Microsoft Power BI
- Microsoft PowerPoint
- Microsoft SharePoint
+- Microsoft Teams
- Microsoft Word
- MultiLine for Intune
- Nine Mail - Email & Calendar
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/faqs.md
For more information, see the article, [Conditional Access service dependencies]
## Why are some tabs not working in Microsoft Teams after enabling Conditional Access policies?
-After enabling some Conditional Access policies on the tenant in Microsoft Teams, certain tabs may no longer function in the desktop client as expected. However, the affected tabs function when using the Microsoft Teams web client. The tabs affected may include Power BI, Forms, VSTS, PowerApps, and SharePoint List.
+After enabling some Conditional Access policies on the tenant in Microsoft Teams, certain tabs may no longer function in the desktop client as expected. However, the affected tabs function when using the Microsoft Teams web client. The tabs affected may include Power BI, Forms, VSTS, Power Apps, and SharePoint List.
To see the affected tabs you must use the Teams web client in Edge, Internet Explorer, or Chrome with the Windows 10 Accounts extension installed. Some tabs depend on web authentication, which doesn't work in the Microsoft Teams desktop client when Conditional Access is enabled. Microsoft is working with partners to enable these scenarios. To date, we have enabled scenarios involving Planner, OneNote, and Stream.
active-directory Require Managed Devices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/require-managed-devices.md
Requiring managed devices for cloud app access ties **Azure AD Conditional Acces
- **[Conditional Access in Azure Active Directory](./overview.md)** - This article provides you with a conceptual overview of Conditional Access and the related terminology.
- **[Introduction to device management in Azure Active Directory](../devices/overview.md)** - This article gives you an overview of the various options you have to get devices under organizational control.
- For Chrome support in **Windows 10 Creators Update (version 1703)** or later, install the [Windows 10 Accounts extension](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji). This extension is required when a Conditional Access policy requires device-specific details.
+- For Firefox support, starting with **Firefox 91** on **Windows 10 version 1809 or above**, configure [Windows SSO](https://support.mozilla.org/en-US/kb/windows-sso).
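As a sketch of what that involves (the preference name below is an assumption based on Firefox's documented Windows SSO support, not something stated in this excerpt), the about:config toggle is:

```
network.http.windows-sso.enabled = true
```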
>[!NOTE]
> We recommend using Azure AD device based Conditional Access policy to get the best enforcement after initial device authentication. This includes closing sessions if the device falls out of compliance and device code flow.
active-directory Service Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/service-dependencies.md
The below table lists additional service dependencies, where the client apps mus
| | SharePoint | Late-bound |
| Outlook groups | Exchange | Early-bound |
| | SharePoint | Early-bound |
-| PowerApps | Microsoft Azure Management (portal and API) | Early-bound |
+| Power Apps | Microsoft Azure Management (portal and API) | Early-bound |
| | Windows Azure Active Directory | Early-bound |
| Project | Dynamics CRM | Early-bound |
| Skype for Business | Exchange | Early-bound |
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/assign-local-admin.md
Device administrators are assigned to all Azure AD joined devices. You cannot sc
- Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges.
- User signs out and signs back in, not lock/unlock, to refresh their profile.
+- Users will not be listed in the local administrator group; the permissions are received through the Primary Refresh Token.
> [!NOTE]
> The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device.
active-directory Concept Primary Refresh Token https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/concept-primary-refresh-token.md
Once issued, a PRT is valid for 14 days and is continuously renewed as long as t
A PRT is used by two key components in Windows:

* **Azure AD CloudAP plugin**: During Windows sign in, the Azure AD CloudAP plugin requests a PRT from Azure AD using the credentials provided by the user. It also caches the PRT to enable cached sign in when the user does not have access to an internet connection.
-* **Azure AD WAM plugin**: When users try to access applications, the Azure AD WAM plugin uses the PRT to enable SSO on Windows 10. Azure AD WAM plugin uses the PRT to request refresh and access tokens for applications that rely on WAM for token requests. It also enables SSO on browsers by injecting the PRT into browser requests. Browser SSO in Windows 10 is supported on Microsoft Edge (natively) and Chrome (via the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji?hl=en) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb?hl=en) extensions).
+* **Azure AD WAM plugin**: When users try to access applications, the Azure AD WAM plugin uses the PRT to enable SSO on Windows 10. The Azure AD WAM plugin uses the PRT to request refresh and access tokens for applications that rely on WAM for token requests. It also enables SSO on browsers by injecting the PRT into browser requests. Browser SSO in Windows 10 is supported on Microsoft Edge (natively), Chrome (via the [Windows 10 Accounts](https://chrome.google.com/webstore/detail/windows-10-accounts/ppnbnpeolgkicgegkbkbjmhlideopiji?hl=en) or [Office Online](https://chrome.google.com/webstore/detail/office/ndjpnladcallmjemlbaebfadecfhkepb?hl=en) extensions), or Mozilla Firefox v91+ (via the [Windows SSO setting](https://support.mozilla.org/en-US/kb/windows-sso)).
## How is a PRT renewed?
By securing these keys with the TPM, we enhance the security for PRT from malic
**App tokens**: When an app requests a token through WAM, Azure AD issues a refresh token and an access token. However, WAM only returns the access token to the app and secures the refresh token in its cache by encrypting it with the user's data protection application programming interface (DPAPI) key. WAM securely uses the refresh token by signing requests with the session key to issue further access tokens. The DPAPI key is secured by an Azure AD based symmetric key in Azure AD itself. When the device needs to decrypt the user profile with the DPAPI key, Azure AD provides the DPAPI key encrypted by the session key, which CloudAP plugin requests TPM to decrypt. This functionality ensures consistency in securing refresh tokens and avoids applications implementing their own protection mechanisms.
-**Browser cookies**: In Windows 10, Azure AD supports browser SSO in Internet Explorer and Microsoft Edge natively or in Google Chrome via the Windows 10 accounts extension. The security is built not only to protect the cookies but also the endpoints to which the cookies are sent. Browser cookies are protected the same way a PRT is, by utilizing the session key to sign and protect the cookies.
+**Browser cookies**: In Windows 10, Azure AD supports browser SSO in Internet Explorer and Microsoft Edge natively, in Google Chrome via the Windows 10 Accounts extension, and in Mozilla Firefox v91+ via a browser setting. The security is built not only to protect the cookies but also the endpoints to which the cookies are sent. Browser cookies are protected the same way a PRT is, by utilizing the session key to sign and protect the cookies.
When a user initiates a browser interaction, the browser (or extension) invokes a COM native client host. The native client host ensures that the page is from one of the allowed domains. The browser could send other parameters to the native client host, including a nonce, however the native client host guarantees validation of the hostname. The native client host requests a PRT-cookie from CloudAP plugin, which creates and signs it with the TPM-protected session key. As the PRT-cookie is signed by the session key, it is very difficult to tamper with. This PRT-cookie is included in the request header for Azure AD to validate the device it is originating from. If using the Chrome browser, only the extension explicitly defined in the native client host's manifest can invoke it preventing arbitrary extensions from making these requests. Once Azure AD validates the PRT cookie, it issues a session cookie to the browser. This session cookie also contains the same session key issued with a PRT. During subsequent requests, the session key is validated effectively binding the cookie to the device and preventing replays from elsewhere.
The following diagrams illustrate the underlying details in issuing, renewing, a
| F | Azure AD validates the Session key signature on the PRT cookie, validates the nonce, verifies that the device is valid in the tenant, and issues an ID token for the web page and an encrypted session cookie for the browser. |

> [!NOTE]
-> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, or Incognito in Google Chrome (when using the Microsoft Accounts extension).
+> The Browser SSO flow described in the steps above does not apply for sessions in private modes such as InPrivate in Microsoft Edge, Incognito in Google Chrome (when using the Microsoft Accounts extension), or private mode in Mozilla Firefox v91+.
## Next steps
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/device-management-azure-portal.md
To enable or disable devices, you have two options:
- The toolbar after drilling down into a specific device.

> [!IMPORTANT]
-> - You must be a global administrator or cloud device administrator in Azure AD to enable or disable a device.
+> - You must be a global administrator, Intune administrator, or cloud device administrator in Azure AD to enable or disable a device.
> - Disabling a device prevents a device from successfully authenticating with Azure AD, thereby preventing the device from accessing your Azure AD resources that are protected by device-based Conditional Access or using Windows Hello for Business credentials.
> - Disabling a device will revoke both the Primary Refresh Token (PRT) and any Refresh Tokens (RT) on the device.
> - Printers cannot be enabled or disabled in Azure AD.
active-directory Howto Device Identity Virtual Desktop Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/howto-device-identity-virtual-desktop-infrastructure.md
Before configuring device identities in Azure AD for your VDI environment, famil
| Azure AD registered | Federated/Managed | Windows current/Windows down-level | Persistent/Non-Persistent | Not Applicable |

<sup>1</sup> **Windows current** devices represent Windows 10, Windows Server 2016 v1803 or higher, and Windows Server 2019.
<sup>2</sup> **Windows down-level** devices represent Windows 7, Windows 8.1, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2. For support information on Windows 7, see [Support for Windows 7 is ending](https://www.microsoft.com/microsoft-365/windows/end-of-windows-7-support). For support information on Windows Server 2008 R2, see [Prepare for Windows Server 2008 end of support](https://www.microsoft.com/cloud-platform/windows-server-2008).
<sup>3</sup> A **Federated** identity infrastructure environment represents an environment with an identity provider such as AD FS or other third-party IDP.
active-directory Troubleshoot Hybrid Join Windows Current https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md
Possible reasons for failure:
- **DSREG_AUTOJOIN_DISC_WAIT_TIMEOUT** (0x801c001f/-2145648609)
    - Reason: Operation timed out while performing Discovery.
    - Resolution: Ensure that `https://enterpriseregistration.windows.net` is accessible in the SYSTEM context. For more information, see the section [Network connectivity requirements](hybrid-azuread-join-managed-domains.md#prerequisites).
-- **DSREG_AUTOJOIN_USERREALM_DISCOVERY_FAILED** (0x801c0021/-2145648611)
+- **DSREG_AUTOJOIN_USERREALM_DISCOVERY_FAILED** (0x801c003d/-2145648579)
    - Reason: Generic Realm Discovery failure. Failed to determine domain type (managed/federated) from STS.
    - Resolution: Find the suberror below to investigate further.
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/directory-self-service-signup.md
AllowEmailVerifiedUsers and AllowAdHocSubscriptions are tenant-wide settings tha
If the preceding conditions are true, then a member user is created in the home tenant, and a B2B guest user is created in the inviting tenant.
-For more information on Flow and PowerApps trial sign-ups, see the following articles:
+For more information on Flow and Power Apps trial sign-ups, see the following articles:
* [How can I prevent my existing users from starting to use Power BI?](https://support.office.com/article/Power-BI-in-your-Organization-d7941332-8aec-4e5e-87e8-92073ce73dc5#bkmk_preventjoining)
-* [Flow in your organization Q&A](/flow/organization-q-and-a)
+* [Flow in your organization Q&A](/power-automate/organization-q-and-a)
### How do the controls work together?

These two parameters can be used in conjunction to define more precise control over self-service sign-up. For example, the following command will allow users to perform self-service sign-up, but only if those users already have an account in Azure AD (in other words, users who would need an email-verified account to be created first cannot perform self-service sign-up):
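The command itself is truncated from this excerpt; based on the two settings named above, a sketch using the MSOnline module would presumably look like this (the exact parameter combination is inferred from the described behavior):

```powershell
# Sketch: permit ad hoc sign-ups, but block creation of new email-verified users
Set-MsolCompanySettings -AllowEmailVerifiedUsers $false -AllowAdHocSubscriptions $true
```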
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/domains-admin-takeover.md
External admin takeover is supported by the following online
The supported service plans include:
-- PowerApps Free
-- PowerFlow Free
+- Power Apps Free
+- Power Automate Free
- RMS for individuals
- Microsoft Stream
- Dynamics 365 free trial
active-directory Users Close Account https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-close-account.md
Users in an unmanaged organization are often created during self-service sign-up
Before you can close your account, you should confirm the following items:
-* Make sure you are a user of an unmanaged Azure AD organization. You can't close your account if you belong to a managed organization. If you belong to a managed organization and want to close your account, you must contact your administrator. For information about how to determine whether you belong to an unmanaged organization, see [Delete the user from Unmanaged Tenant](/flow/gdpr-dsr-delete#delete-the-user-from-unmanaged-tenant).
+* Make sure you are a user of an unmanaged Azure AD organization. You can't close your account if you belong to a managed organization. If you belong to a managed organization and want to close your account, you must contact your administrator. For information about how to determine whether you belong to an unmanaged organization, see [Delete the user from Unmanaged Tenant](/power-automate/gdpr-dsr-delete#delete-the-user-from-unmanaged-tenant).
* Save any data you want to keep. For information about how to submit an export request, see [Accessing and exporting system-generated logs for Unmanaged Tenants](/power-platform/admin/powerapps-gdpr-dsr-guide-systemlogs#accessing-and-exporting-system-generated-logs-for-unmanaged-tenants).
To close an unmanaged work or school account, follow these steps:
## Next steps

- [What is self-service sign-up for Azure Active Directory?](directory-self-service-signup.md)
-- [Delete the user from Unmanaged Tenant](/flow/gdpr-dsr-delete#delete-the-user-from-unmanaged-tenant)
+- [Delete the user from Unmanaged Tenant](/power-automate/gdpr-dsr-delete#delete-the-user-from-unmanaged-tenant)
- [Accessing and exporting system-generated logs for Unmanaged Tenants](/power-platform/admin/powerapps-gdpr-dsr-guide-systemlogs#accessing-and-exporting-system-generated-logs-for-unmanaged-tenants)
active-directory Users Search Enhanced https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/users-search-enhanced.md
The following are the displayed user properties on the **All users** page:
- Company name: The company name with which the user is associated.
- Invitation state: The status of the invitation for a guest user.
- Mail: The email of the user.
- Last sign-in: The date the user last signed in. This property is visible only to users with permission to read audit logs (Reporting_ApplicationAuditLogs_Read).

![new user properties displayed on All users and Deleted users pages](./media/users-search-enhanced/user-properties.png)
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/google-federation.md
You can also give Google guest users a direct link to an application or resource
Starting September 30, 2021, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. The following are known scenarios that will impact Gmail users:
-- Microsoft apps (e.g. Teams and PowerApps) on Windows
+- Microsoft apps (e.g. Teams and Power Apps) on Windows
- Windows apps that use the [WebView](/windows/communitytoolkit/controls/wpf-winforms/webview) control, [WebView2](/microsoft-edge/webview2/), or the older WebBrowser control, for authentication. These apps should migrate to using the Web Account Manager (WAM) flow.
- Android applications using the WebView UI element
- iOS applications using UIWebView/WKWebview
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/users-default-permissions.md
Ability to create Microsoft 365 groups | Setting this option to No prevents user
Restrict access to Azure AD administration portal | <p>Setting this option to No lets non-administrators use the Azure AD administration portal to read and manage Azure AD resources. Yes restricts all non-administrators from accessing any Azure AD data in the administration portal.</p><p>**Note**: this setting does not restrict access to Azure AD data using PowerShell or other clients such as Visual Studio. When set to Yes, to grant a specific non-admin user the ability to use the Azure AD administration portal, assign any administrative role such as the Directory Readers role.</p><p>**Note**: this setting will block non-admin users who are owners of groups or applications from using the Azure portal to manage their owned resources.</p><p>This role allows reading basic directory information, which member users have by default (guests and service principals do not).</p>
Ability to read other users | This setting is available in PowerShell only. Setting this flag to $false prevents all non-admins from reading user information from the directory. This flag does not prevent reading user information in other Microsoft services like Exchange Online. This setting is meant for special circumstances, and setting this flag to $false is not recommended.
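As a sketch of that PowerShell-only flag (MSOnline module; the parameter name is an assumption based on the setting's description):

```powershell
# Sketch: prevent non-admins from reading other users' information from the directory
Set-MsolCompanySettings -UsersPermissionToReadOtherUsersEnabled $false
```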
->![NOTE]
+>[!NOTE]
>It's assumed the average user would only use the portal to access Azure AD, and not use PowerShell or CLI to access their resources. Currently, restricting access to users' default permissions only occurs when the user tries to access the directory within the Azure portal.

## Restrict guest users default permissions
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Using custom policies, you can now add the Azure AD common endpoint as an identi
**Service category:** My Apps **Product capability:** SSO
-Users can now access applications through internal URLs even when outside your corporate network by using the My Apps Secure Sign-in Extension for Azure AD. This will work with any application that you have published using Azure AD Application Proxy, on any browser that also has the Access Panel browser extension installed. The URL redirection functionality is automatically enabled once a user logs into the extension. The extension is available for download on [Microsoft Edge](https://go.microsoft.com/fwlink/?linkid=845176), [Chrome](https://go.microsoft.com/fwlink/?linkid=866367), and [Firefox](https://go.microsoft.com/fwlink/?linkid=866366).
+Users can now access applications through internal URLs even when outside your corporate network by using the My Apps Secure Sign-in Extension for Azure AD. This will work with any application that you have published using Azure AD Application Proxy, on any browser that also has the Access Panel browser extension installed. The URL redirection functionality is automatically enabled once a user logs into the extension. The extension is available for download on [Microsoft Edge](https://go.microsoft.com/fwlink/?linkid=845176) and [Chrome](https://go.microsoft.com/fwlink/?linkid=866367).
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Administrators can then choose to take action on these events. Administrators ca
- Confirm sign-in safe

> [!NOTE]
-> Identity Protection evaluates risk for all authentication flows, whether it be interactive or non-interactive. However, the sign-in report shows only the interactive sign-ins. You may see risky sign-ins that occurred on non-interactive sign-ins, but the sign-in will not show up in the Azure AD sign-ins report.
+> Identity Protection evaluates risk for all authentication flows, whether it be interactive or non-interactive. The risky sign-in report now shows both interactive and non-interactive sign-ins. Use the "sign-in type" filter to modify this view.
## Risk detections
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
The following script demonstrates how to:
```azurecli
az login --identity
- spID=$(az resource list -n <VM-NAME> --query [*].identity.principalId --out tsv)
+ $spID=$(az resource list -n <VM-NAME> --query [*].identity.principalId --out tsv)
echo The managed identity for Azure resources service principal ID is $spID
```
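The excerpt ends here; a typical next step in such a script (the role and scope below are placeholders, not part of the original) would grant the identity access to a resource:

```azurecli
# Sketch: grant the VM's identity a role at a placeholder scope
az role assignment create --assignee $spID --role Reader --scope <RESOURCE-SCOPE>
```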
active-directory Pim How To Change Default Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md
Previously updated : 07/27/2021 Last updated : 08/25/2021
You can require that users enter a business justification when they activate. To
## Require approval to activate
-If setting multiple approvers, approval completes as soon as one of them approves or denies. You can't require approval from at least two users. To require approval to activate a role, follow these steps.
+If setting multiple approvers, approval completes as soon as one of them approves or denies. You can't force approval from a second or subsequent approver. To require approval to activate a role, follow these steps.
1. Check the **Require approval to activate** check box.
If setting multiple approvers, approval completes as soon as one of them approve
![Select a user or group pane to select approvers](./media/pim-resource-roles-configure-role-settings/resources-role-settings-select-approvers.png)
-1. Select at least one user and then click **Select**. Select at least one approver. If no specific approvers are selected, privileged role administrators/global administrators will become the default approvers.
+1. Select at least one user and then click **Select**. Select at least one approver. If no specific approvers are selected, Privileged Role Administrators and Global Administrators become the default approvers.
- Your selections will appear in the list of selected approvers.
-
-1. Once you have specified your all your role settings, select **Update** to save your changes.
+1. Select **Update** to save your changes.
## Next steps
active-directory Pim How To Start Security Review https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/privileged-identity-management/pim-how-to-start-security-review.md
This article describes how to create one or more access reviews for privileged A
5. Under Manage, select **Access reviews**, and then select **New** to create a new access review.
- <kbd> ![Azure AD roles - Access reviews list showing the status of all reviews](./media/pim-how-to-start-security-review/access-reviews.png) </kbd>
- 6. Name the access review. Optionally, give the review a description. The name and description are shown to the reviewers. <kbd> ![Create an access review - Review name and description](./media/pim-how-to-start-security-review/name-description.png) </kbd>
active-directory Howto Manage Inactive User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md
This section lists what you need to know about the lastSignInDateTime property.
The **lastSignInDateTime** property is exposed by the [signInActivity resource type](/graph/api/resources/signinactivity?view=graph-rest-beta&preserve-view=true) of the [Microsoft Graph REST API](/graph/overview#whats-in-microsoft-graph).
+> [!NOTE]
+> The signInActivity Graph API endpoint is not yet supported in US Government GCC High environments.
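As an illustrative sketch (the `$accessToken` variable and required permissions are assumptions, not part of the original), the property can be queried from the beta endpoint like this:

```powershell
# Sketch: list users with their last sign-in activity from the Microsoft Graph beta endpoint
$uri = 'https://graph.microsoft.com/beta/users?$select=displayName,signInActivity'
Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" }
```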
+
### Is the lastSignInDateTime property available through the Get-AzureAdUser cmdlet?
No.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/roles/permissions-reference.md
This article lists the Azure AD built-in roles you can assign to allow managemen
> | [Partner Tier2 Support](#partner-tier2-support) | Do not use - not intended for general use. | e00e864a-17c5-4a4b-9c06-f5b95a8d5bd8 |
> | [Password Administrator](#password-administrator) | Can reset passwords for non-administrators and Password Administrators. | 966707d0-3269-4727-9be2-8c3a10f19b9d |
> | [Power BI Administrator](#power-bi-administrator) | Can manage all aspects of the Power BI product. | a9ea8996-122f-4c74-9520-8edcd192826c |
-> | [Power Platform Administrator](#power-platform-administrator) | Can create and manage all aspects of Microsoft Dynamics 365, PowerApps and Microsoft Flow. | 11648597-926c-4cf3-9c36-bcebb0ba8dcc |
+> | [Power Platform Administrator](#power-platform-administrator) | Can create and manage all aspects of Microsoft Dynamics 365, Power Apps and Power Automate. | 11648597-926c-4cf3-9c36-bcebb0ba8dcc |
> | [Printer Administrator](#printer-administrator) | Can manage all aspects of printers and printer connectors. | 644ef478-e28f-4e28-b9dc-3fdde9aa0b1f |
> | [Printer Technician](#printer-technician) | Can register and unregister printers and update printer status. | e8cef6f1-e4bd-4ea8-bc07-4b8d950f4477 |
> | [Privileged Authentication Administrator](#privileged-authentication-administrator) | Can access to view, set and reset authentication method information for any user (admin or non-admin). | 7be44c8a-adaf-4e2a-84d6-ab2649e08a13 |
Users with this role have global permissions within Microsoft Power BI, when the
## Power Platform Administrator
-Users in this role can create and manage all aspects of environments, PowerApps, Flows, Data Loss Prevention policies. Additionally, users with this role have the ability to manage support tickets and monitor service health.
+Users in this role can create and manage all aspects of environments, Power Apps, Flows, and Data Loss Prevention policies. Additionally, users with this role have the ability to manage support tickets and monitor service health.
> [!div class="mx-tableFixed"]
> | Actions | Description |
active-directory Ezra Coaching Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/ezra-coaching-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Ezra Coaching | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Ezra Coaching.
+ Last updated : 08/23/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Ezra Coaching
+
+In this tutorial, you'll learn how to integrate Ezra Coaching with Azure Active Directory (Azure AD). When you integrate Ezra Coaching with Azure AD, you can:
+
+* Control in Azure AD who has access to Ezra Coaching.
+* Enable your users to be automatically signed-in to Ezra Coaching with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Ezra Coaching single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Ezra Coaching supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
++
+## Adding Ezra Coaching from the gallery
+
+To configure the integration of Ezra Coaching into Azure AD, you need to add Ezra Coaching from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Ezra Coaching** in the search box.
+1. Select **Ezra Coaching** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Ezra Coaching
+
+Configure and test Azure AD SSO with Ezra Coaching using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Ezra Coaching.
+
+To configure and test Azure AD SSO with Ezra Coaching, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Ezra Coaching SSO](#configure-ezra-coaching-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Ezra Coaching test user](#create-ezra-coaching-test-user)** - to have a counterpart of B.Simon in Ezra Coaching that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Ezra Coaching** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+
+1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type the URL:
+ `https://app.helloezra.com/`
++
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up Ezra Coaching** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
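+
+If you script your setup, you can also fetch the same app-specific federation metadata document from the command line. The following is a minimal sketch, not the documented procedure; the tenant ID and the application (client) ID are placeholders you supply:
+
+```azurecli
+# Download the federation metadata XML for the app (anonymous endpoint).
+az rest --method get --skip-authorization-header \
+    --url "https://login.microsoftonline.com/<TENANT_ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<APP_CLIENT_ID>" \
+    --output-file federationmetadata.xml
+```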
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
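+
+As an alternative to the portal steps above, you can create the same test user from the Azure CLI; a minimal sketch (the UPN and password shown are placeholders):
+
+```azurecli
+az ad user create --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<STRONG_PASSWORD>" \
+    --force-change-password-next-login true
+```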
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Ezra Coaching.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Ezra Coaching**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
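+
+If you prefer scripting, the same assignment can be made by calling Microsoft Graph. A hedged sketch follows; the object IDs are placeholders you look up first, and the all-zero `appRoleId` denotes the default access role:
+
+```azurecli
+# Example lookups (output property names can vary by CLI version):
+#   az ad user show --id "B.Simon@contoso.com" --query objectId --output tsv
+#   az ad sp list --display-name "Ezra Coaching" --query "[0].objectId" --output tsv
+az rest --method post \
+    --url "https://graph.microsoft.com/v1.0/servicePrincipals/<SP_OBJECT_ID>/appRoleAssignedTo" \
+    --body '{"principalId": "<USER_OBJECT_ID>", "resourceId": "<SP_OBJECT_ID>", "appRoleId": "00000000-0000-0000-0000-000000000000"}'
+```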
+
+## Configure Ezra Coaching SSO
+
+To configure single sign-on on the **Ezra Coaching** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Ezra Coaching support team](mailto:help@helloezra.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Ezra Coaching test user
+
+In this section, you create a user called Britta Simon in Ezra Coaching. Work with the [Ezra Coaching support team](mailto:help@helloezra.com) to add the users to the Ezra Coaching platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Ezra Coaching sign-on URL, where you can initiate the login flow.
+
+* Go to the Ezra Coaching sign-on URL directly and initiate the login flow from there.
+
+IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to Ezra Coaching, for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Ezra Coaching tile in My Apps, if the app is configured in SP mode, you are redirected to the application sign-on page to initiate the login flow; if it's configured in IDP mode, you should be automatically signed in to Ezra Coaching, for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Ezra Coaching, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
++
active-directory Traction Guest Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/saas-apps/traction-guest-tutorial.md
+
+ Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Traction Guest | Microsoft Docs'
+description: Learn how to configure single sign-on between Azure Active Directory and Traction Guest.
+ Last updated : 08/24/2021
+# Tutorial: Azure Active Directory single sign-on (SSO) integration with Traction Guest
+
+In this tutorial, you'll learn how to integrate Traction Guest with Azure Active Directory (Azure AD). When you integrate Traction Guest with Azure AD, you can:
+
+* Control in Azure AD who has access to Traction Guest.
+* Enable your users to be automatically signed-in to Traction Guest with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Traction Guest single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
++
+* Traction Guest supports **IDP** initiated SSO.
+
+* Traction Guest supports **Just In Time** user provisioning.
++
+## Adding Traction Guest from the gallery
+
+To configure the integration of Traction Guest into Azure AD, you need to add Traction Guest from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Traction Guest** in the search box.
+1. Select **Traction Guest** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for Traction Guest
+
+Configure and test Azure AD SSO with Traction Guest using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Traction Guest.
+
+To configure and test Azure AD SSO with Traction Guest, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Traction Guest SSO](#configure-traction-guest-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create Traction Guest test user](#create-traction-guest-test-user)** - to have a counterpart of B.Simon in Traction Guest that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Traction Guest** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<ENVIRONMENT>.tractionguest.com/saml/metadata`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<ENVIRONMENT>.tractionguest.com/sessions/sso/callback`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Traction Guest Client support team](mailto:support@tractionguest.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
+
+1. On the **Set up Traction Guest** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
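+
+If you manage the app registration from a script, the same Identifier and Reply URL values can often be set with the Azure CLI as well. A sketch, assuming `<APP_CLIENT_ID>` is the application (client) ID of the Traction Guest app registration:
+
+```azurecli
+az ad app update --id <APP_CLIENT_ID> \
+    --identifier-uris "https://<ENVIRONMENT>.tractionguest.com/saml/metadata" \
+    --reply-urls "https://<ENVIRONMENT>.tractionguest.com/sessions/sso/callback"
+```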
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Traction Guest.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Traction Guest**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Traction Guest SSO
+
+To configure single sign-on on the **Traction Guest** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Traction Guest support team](mailto:support@tractionguest.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create Traction Guest test user
+
+In this section, a user called Britta Simon is created in Traction Guest. Traction Guest supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Traction Guest, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to Traction Guest, for which you set up SSO.
+
+* You can also use Microsoft My Apps. When you click the Traction Guest tile in My Apps, you should be automatically signed in to Traction Guest, for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
++
+## Next steps
+
+Once you configure Traction Guest, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
++
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/aks/private-clusters.md
As mentioned, virtual network peering is one way to access your private cluster.
* For customers that need to enable Azure Container Registry to work with private AKS, the Container Registry virtual network must be peered with the agent cluster virtual network.
* No support for converting existing AKS clusters into private clusters.
* Deleting or modifying the private endpoint in the customer subnet will cause the cluster to stop functioning.
-* After customers have updated the A record on their own DNS servers, those Pods would still resolve apiserver FQDN to the older IP after migration until they're restarted. Customers need to restart hostNetwork Pods and default-DNSPolicy Pods after control plane migration.
-* In the case of maintenance on the control plane, your [AKS IP](./limit-egress-traffic.md) might change. In this case you must update the A record pointing to the API server private IP on your custom DNS server and restart any custom pods or deployments using hostNetwork.
<!-- LINKS - internal --> [az-provider-register]: /cli/azure/provider#az_provider_register
api-management Api Management Using With Internal Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-using-with-internal-vnet.md
After successful deployment, you should see your API Management service's **priv
| Virtual IP address | Description |
| -- | -- |
| **Private virtual IP address** | A load balanced IP address from within the API Management-delegated subnet, over which you can access `gateway`, `portal`, `management`, and `scm` endpoints. |
-| **Public virtual IP address** | Used *only* for control plane traffic to `management` endpoint over `port 3443`. Can be locked down to the [ApiManagement][ServiceTags] service tag. |
+| **Public virtual IP address** | Used *mainly* for control plane traffic to the `management` endpoint over `port 3443`. Can be locked down to the [ApiManagement][ServiceTags] service tag. In the external VNet configuration, it's also used for runtime API traffic. |
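+
+For example, locking inbound control plane traffic down to the service tag with a network security group rule might look like the following sketch (resource names are placeholders):
+
+```azurecli
+# Allow only API Management control plane traffic on port 3443.
+az network nsg rule create --resource-group <RG> --nsg-name <NSG_NAME> \
+    --name AllowApimControlPlane --priority 100 --direction Inbound \
+    --access Allow --protocol Tcp \
+    --source-address-prefixes ApiManagement \
+    --destination-address-prefixes VirtualNetwork \
+    --destination-port-ranges 3443
+```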
![API Management dashboard with an internal VNET configured][api-management-internal-vnet-dashboard]
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-custom-container.md
Title: Configure a custom container
description: Learn how to configure a custom container in Azure App Service. This article shows the most common configuration tasks. Previously updated : 02/23/2021 Last updated : 08/25/2021 zone_pivot_groups: app-service-containers-windows-linux
This guide provides key concepts and instructions for containerization of Linux
For your custom Windows image, you must choose the right [parent image (base image)](https://docs.docker.com/develop/develop-images/baseimages/) for the framework you want:

-- To deploy .NET Framework apps, use a parent image based on the Windows Server Core [Long-Term Servicing Channel (LTSC)](/windows-server/get-started-19/servicing-channels-19#long-term-servicing-channel-ltsc) release.
-- To deploy .NET Core apps, use a parent image based on the Windows Server Nano [Semi-Annual Servicing Channel (SAC)](/windows-server/get-started-19/servicing-channels-19#semi-annual-channel) release.
+- To deploy .NET Framework apps, use a parent image based on the Windows Server 2019 Core [Long-Term Servicing Channel (LTSC)](/windows-server/get-started/servicing-channels-comparison#long-term-servicing-channel-ltsc) release.
+- To deploy .NET Core apps, use a parent image based on the Windows Server 2019 Nano [Semi-Annual Servicing Channel (SAC)](/windows-server/get-started/servicing-channels-comparison#semi-annual-channel) release.
It takes some time to download a parent image during app start-up. However, you can reduce start-up time by using one of the following parent images that are already cached in Azure App Service:
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-ssl-certificate.md
When the operation completes, you see the certificate in the **Private Key Certi
> [!IMPORTANT] > To secure a custom domain with this certificate, you still need to create a certificate binding. Follow the steps in [Create binding](configure-ssl-bindings.md#create-binding).
->
+
+> [!NOTE]
+> To renew a [certificate you uploaded](#upload-a-private-certificate), see [Renew certificate binding](configure-ssl-bindings.md#renew-certificate-binding). App Service doesn't automatically sync your newly uploaded certificate with the bindings. The automated certificate syncing feature is only available for [imported Key Vault certificates](#import-a-certificate-from-key-vault) and [imported App Service Certificates](#import-an-app-service-certificate).
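+
+As a sketch of such a manual renewal with the Azure CLI, you can upload the renewed PFX and then rebind it (file name, password, and thumbprint are placeholders):
+
+```azurecli
+# Upload the renewed private certificate (PFX).
+az webapp config ssl upload --resource-group <RG> --name <APP_NAME> \
+    --certificate-file ./renewed.pfx --certificate-password "<PFX_PASSWORD>"
+
+# Rebind the custom domain to the new certificate's thumbprint.
+az webapp config ssl bind --resource-group <RG> --name <APP_NAME> \
+    --certificate-thumbprint <NEW_THUMBPRINT> --ssl-type SNI
+```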
## Upload a public certificate
Once the rekey operation is complete, click **Sync**. The sync operation automat
### Renew certificate
-> [!NOTE]
-> To renew a [certificate you uploaded](#upload-a-private-certificate), see [Export certificate binding](configure-ssl-bindings.md#renew-certificate-binding).
-
> [!NOTE]
> The renewal process requires that [the well-known service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). This permission is configured for you when you import an App Service Certificate through the portal, and should not be removed from your key vault.
app-service App Service App Service Environment Network Configuration Expressroute https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/environment/app-service-app-service-environment-network-configuration-expressroute.md
Title: Configure Azure ExpressRoute v1
-description: Network configuration for App Service Environment for PowerApps with Azure ExpressRoute. This doc is provided only for customers who use the legacy v1 ASE.
+description: Network configuration for App Service Environment for Power Apps with Azure ExpressRoute. This doc is provided only for customers who use the legacy v1 ASE.
ms.assetid: 34b49178-2595-4d32-9b41-110c96dde6bf
-# Network configuration details for App Service Environment for PowerApps with Azure ExpressRoute
+# Network configuration details for App Service Environment for Power Apps with Azure ExpressRoute
Customers can connect an [Azure ExpressRoute][ExpressRoute] circuit to their virtual network infrastructure to extend their on-premises network to Azure. App Service Environment is created in a subnet of the [virtual network][virtualnetwork] infrastructure. Apps that run on App Service Environment establish secure connections to back-end resources that are accessible only over the ExpressRoute connection.
Now you're ready to deploy App Service Environment!
## Next steps
-To get started with App Service Environment for PowerApps, see [Introduction to App Service Environment][IntroToAppServiceEnvironment].
+To get started with App Service Environment for Power Apps, see [Introduction to App Service Environment][IntroToAppServiceEnvironment].
<!-- LINKS --> [virtualnetwork]: https://azure.microsoft.com/services/virtual-network/
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/reference-app-settings.md
The following environment variables are related to the app environment in genera
| `REMOTEDEBUGGINGVERSION` | Remote debugging version. ||
| `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` | By default, App Service creates a shared storage for you at app creation. To use a custom storage account instead, set to the connection string of your storage account. For functions, see [App settings reference for Functions](../azure-functions/functions-app-settings.md#website_contentazurefileconnectionstring). | `DefaultEndpointsProtocol=https;AccountName=<name>;AccountKey=<key>` |
| `WEBSITE_CONTENTSHARE` | When you specify a custom storage account with `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING`, App Service creates a file share in that storage account for your app. To use a custom name, set this variable to the name you want. If a file share with the specified name doesn't exist, App Service creates it for you. | `myapp123` |
-| `WEBSITE_AUTH_ENCRYPTION_KEY` | By default, the automatically generated key is used as the encryption key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. ||
-| `WEBSITE_AUTH_SIGNING_KEY` | By default, the automatically generated key is used as the signing key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. ||
| `WEBSITE_SCM_ALWAYS_ON_ENABLED` | Read-only. Shows whether Always On is enabled (`1`) or not (`0`). ||
| `WEBSITE_SCM_SEPARATE_STATUS` | Read-only. Shows whether the Kudu app is running in a separate process (`1`) or not (`0`). ||
This section shows the configurable runtime settings for each supported language
| `HOME` | Read-only. Directory that points to shared storage (`/home`). |
| `DUMP_DIR` | Read-only. Directory for the crash dumps (`/home/logs/dumps`). |
| `APP_SVC_RUN_FROM_COPY` | Linux apps only. By default, the app is run from `/home/site/wwwroot`, a shared directory for all scaled-out instances. Set this variable to `true` to copy the app to a local directory in your container and run it from there. When using this option, be sure not to hard-code any reference to `/home/site/wwwroot`. Instead, use a path relative to `/home/site/wwwroot`. |
+| `MACHINEKEY_Decryption` | For Windows native apps or Windows container apps, this variable is injected into the app environment or container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100))). To override the default `decryption` value, configure it as an App Service app setting, or set it directly in the `machineKey` element of the *Web.config* file. |
+| `MACHINEKEY_DecryptionKey` | For Windows native apps or Windows container apps, this variable is injected into the app environment or container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100))). To override the automatically generated `decryptionKey` value, configure it as an App Service app setting, or set it directly in the `machineKey` element of the *Web.config* file.|
+| `MACHINEKEY_Validation` | For Windows native apps or Windows container apps, this variable is injected into the app environment or container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100))). To override the default `validation` value, configure it as an App Service app setting, or set it directly in the `machineKey` element of the *Web.config* file.|
+| `MACHINEKEY_ValidationKey` | For Windows native apps or Windows container apps, this variable is injected into the app environment or container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100))). To override the automatically generated `validationKey` value, configure it as an App Service app setting, or set it directly in the `machineKey` element of the *Web.config* file.|
<!-- | `USE_DOTNET_MONITOR` | if =true then /opt/dotnetcore-tools/dotnet-monitor collect --urls "http://0.0.0.0:50051" --metrics true --metricUrls "http://0.0.0.0:50050" > 2>&1 & --> # [Java](#tab/java)
For more information on custom containers, see [Run a custom container in Azure]
| `DOCKER_REGISTRY_SERVER_USERNAME` | Username to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
| `DOCKER_REGISTRY_SERVER_PASSWORD` | Password to authenticate with the registry server at `DOCKER_REGISTRY_SERVER_URL`. For security, this variable is not passed on to the container. ||
| `WEBSITES_WEB_CONTAINER_NAME` | In a Docker Compose app, only one of the containers can be internet accessible. Set to the name of the container defined in the configuration file to override the default container selection. By default, the internet accessible container is the first container to define port 80 or 8080, or, when no such container is found, the first container defined in the configuration file. | |
-| `WEBSITES_PORT` | For a custom container, the custom port number on the container to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. ||
+| `WEBSITES_PORT` | For a custom container, the custom port number on the container for App Service to route requests to. By default, App Service attempts automatic port detection of ports 80 and 8080. This setting is *not* injected into the container as an environment variable. ||
| `WEBSITE_CPU_CORES_LIMIT` | By default, a Windows container runs with all available cores for your chosen pricing tier. To reduce the number of cores, set to the number of desired cores limit. For more information, see [Customize the number of compute cores](configure-custom-container.md?pivots=container-windows#customize-the-number-of-compute-cores). ||
| `WEBSITE_MEMORY_LIMIT_MB` | By default, all Windows containers deployed in Azure App Service are limited to 1 GB RAM. Set to the desired memory limit in MB. The cumulative total of this setting across apps in the same plan must not exceed the amount allowed by the chosen pricing tier. For more information, see [Customize container memory](configure-custom-container.md?pivots=container-windows#customize-container-memory). ||
-| `MACHINEKEY_Decryption` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the default `decryption` value, set it as an app setting. ||
-| `MACHINEKEY_DecryptionKey` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the automatically generated `decryptionKey` value, set it as an app setting. ||
-| `MACHINEKEY_Validation` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the default `validation` value, set it as an app setting. ||
-| `MACHINEKEY_ValidationKey` | For Windows containers, this variable is injected into the container to enable ASP.NET cryptographic routines (see [machineKey Element](/previous-versions/dotnet/netframework-4.0/w8h3skw9(v=vs.100)). To override the automatically generated `validationKey` value, set it as an app setting. ||
| `CONTAINER_WINRM_ENABLED` | For a Windows container app, set to `1` to enable Windows Remote Management (WIN-RM). || <!--
The following environment variables are related to [App Service authentication](
| `WEBSITE_AUTH_VALIDATE_NONCE`| `true` or `false`. The default value is `true`. This value should never be set to `false` except when temporarily debugging [cryptographic nonce](https://en.wikipedia.org/wiki/Cryptographic_nonce) validation failures that occur during interactive logins. This application setting is intended for use with the V1 (classic) configuration experience. If using the V2 authentication configuration schema, you should instead use the `login.nonce.validateNonce` configuration value. |
| `WEBSITE_AUTH_V2_CONFIG_JSON` | This environment variable is populated automatically by the Azure App Service platform and is used to configure the integrated authentication module. The value of this environment variable corresponds to the V2 (non-classic) authentication configuration for the current app in Azure Resource Manager. It's not intended to be configured explicitly. |
| `WEBSITE_AUTH_ENABLED` | Read-only. Injected into a Windows or Linux app to indicate whether App Service authentication is enabled. |
+| `WEBSITE_AUTH_ENCRYPTION_KEY` | By default, the automatically generated key is used as the encryption key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. If specified, it supersedes the `MACHINEKEY_DecryptionKey` setting. ||
+| `WEBSITE_AUTH_SIGNING_KEY` | By default, the automatically generated key is used as the signing key. To override, set to a desired key. This is recommended if you want to share tokens or sessions across multiple apps. If specified, it supersedes the `MACHINEKEY_ValidationKey` setting. ||
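+
+For example, to share tokens or sessions across two apps, both apps need the same key values. A minimal sketch with the Azure CLI, where the resource group and app names are placeholders:
+
+```azurecli
+# Generate one key and apply it to both apps so that tokens or sessions
+# issued by one app validate in the other.
+KEY=$(openssl rand -hex 32)
+az webapp config appsettings set --resource-group <RG> --name <APP_ONE> \
+    --settings WEBSITE_AUTH_ENCRYPTION_KEY=$KEY WEBSITE_AUTH_SIGNING_KEY=$KEY
+az webapp config appsettings set --resource-group <RG> --name <APP_TWO> \
+    --settings WEBSITE_AUTH_ENCRYPTION_KEY=$KEY WEBSITE_AUTH_SIGNING_KEY=$KEY
+```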
<!-- System settings WEBSITE_AUTH_RUNTIME_VERSION
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/application-gateway/configuration-http-settings.md
The application gateway routes traffic to the back-end servers by using the conf
## Cookie-based affinity
-Azure Application Gateway uses gateway-managed cookies for maintaining user sessions. When a user sends the first request to Application Gateway, it sets an affinity cookie in the response with a hash value which contains the session details, so that the subsequent requests carrying the affinity cookie will be routed to the same backend server for maintaining stickiness.
+Azure Application Gateway uses gateway-managed cookies for maintaining user sessions. When a user sends the first request to Application Gateway, it sets an affinity cookie in the response with a hash value which contains the session details, so that the subsequent requests carrying the affinity cookie will be routed to the same backend server for maintaining stickiness.
This feature is useful when you want to keep a user session on the same server and when session state is saved locally on the server for a user session. If the application can't handle cookie-based affinity, you can't use this feature. To use it, make sure that the clients support cookies.
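+
+For example, turning cookie-based affinity on for an existing backend HTTP setting from the Azure CLI might look like this sketch (the gateway and setting names are placeholders):
+
+```azurecli
+az network application-gateway http-settings update --resource-group <RG> \
+    --gateway-name <APPGW_NAME> --name appGatewayBackendHttpSettings \
+    --cookie-based-affinity Enabled
+```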
+> [!NOTE]
+> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
+ The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://tools.ietf.org/id/draft-ietf-httpbis-rfc6265bis-03.html#rfc.section.5.3.7) attribute has to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in a HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/overview.md
Any Azure service that is used on Arc-enabled Kubernetes, for example Azure Secu
### Azure Arc-enabled data services
-In the current preview phase, Azure Arc-enabled data services are offered at no extra cost.
+For information, see [Azure pricing page](https://azure.microsoft.com/pricing/).
## Next steps
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-csharp.md
Title: Create a C# function from the command line - Azure Functions description: Learn how to create a C# function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 10/03/2020 Last updated : 08/15/2021 adobe-target: true
adobe-target-content: ./create-first-function-cli-csharp-ieux
[!INCLUDE [functions-language-selector-quickstart-cli](../../includes/functions-language-selector-quickstart-cli.md)]
-In this article, you use command-line tools to create a C# class library-based function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+In this article, you use command-line tools to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+
+This article supports creating both types of compiled C# functions:
+
++ [In-process](create-first-function-cli-csharp.md?tabs=in-process) - runs in the same process as the Functions host process. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
++ [Isolated process](create-first-function-cli-csharp.md?tabs=isolated-process) - runs in a separate .NET worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).

Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There is also a [Visual Studio Code-based version](create-first-function-vs-code
Before you begin, you must have the following:
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ The [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download)
-
-+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 3.x.
-
-+ One of the following tools for creating Azure resources:
- + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later.
-
- + [Azure PowerShell](/powershell/azure/install-az-ps) version 5.0 or later.
++ You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).

### Prerequisite check
Verify your prerequisites, which depend on whether you are using Azure CLI or Az
+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 3.x.

++ Run `dotnet --list-sdks` to check that the required versions are installed.
+
+ Run `az --version` to check that the Azure CLI version is 2.4 or later.

+ Run `az login` to sign in to Azure and verify an active subscription.
-+ Run `dotnet --list-sdks` to check that .NET Core SDK version 3.1.x is installed
- # [Azure PowerShell](#tab/azure-powershell)

+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 3.x.

++ Run `dotnet --list-sdks` to check that the required versions are installed.
+
+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later.

+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription.
-+ Run `dotnet --list-sdks` to check that .NET Core SDK version 3.1.x is installed
- ## Create a local function project
In Azure Functions, a function project is a container for one or more individual
1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj* with the specified runtime:
+ # [In-process](#tab/in-process)
    ```console
    func init LocalFunctionProj --dotnet
    ```
+ # [Isolated process](#tab/isolated-process)
+
+ ```csharp
+ func init LocalFunctionProj --dotnet-isolated
+ ```
+
+ 1. Navigate into the project folder: ```console
If desired, you can skip to [Run the function locally](#run-the-function-locally
#### HttpExample.cs
+The function code generated from the template depends on the type of compiled C# project.
+
+# [In-process](#tab/in-process)
*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable, which is an [HttpRequest](/dotnet/api/microsoft.aspnetcore.http.httprequest) that's decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. :::code language="csharp" source="~/functions-docs-csharp/http-trigger-template/HttpExample.cs":::
-The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.actionresult) that returns a response message as either an [OkObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.okobjectresult) (200) or a [BadRequestObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.badrequestobjectresult) (400). To learn more, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=csharp).
+The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.actionresult) that returns a response message as either an [OkObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.okobjectresult) (200) or a [BadRequestObjectResult](/dotnet/api/microsoft.aspnetcore.mvc.badrequestobjectresult) (400).
+
+# [Isolated process](#tab/isolated-process)
+
+*HttpExample.cs* contains a `Run` method that receives request data in the `req` variable, which is an [HttpRequestData](/dotnet/api/microsoft.azure.functions.worker.http.httprequestdata) object that's decorated with the **HttpTriggerAttribute**, which defines the trigger behavior. Because of the isolated process model, `HttpRequestData` is a representation of the actual `HttpRequest`, and not the request object itself.
++
+The return object is an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object that contains the data that's handed back to the HTTP response.
+++
+To learn more, see [Azure Functions HTTP triggers and bindings](./functions-bindings-http-webhook.md?tabs=csharp).
+
+## Run the function locally
+
+1. Run your function by starting the local Azure Functions runtime host from the *LocalFunctionProj* folder:
+
+ ```
+ func start
+ ```
+
+ Toward the end of the output, the following lines should appear:
+
+ <pre>
+ ...
+
+ Now listening on: http://0.0.0.0:7071
+ Application started. Press Ctrl+C to shut down.
+
+ Http Functions:
+
+ HttpExample: [GET,POST] http://localhost:7071/api/HttpExample
+ ...
+
+ </pre>
+
+ >[!NOTE]
+ > If HttpExample doesn't appear as shown above, you likely started the host from outside the root folder of the project. In that case, use **Ctrl**+**C** to stop the host, navigate to the project's root folder, and run the previous command again.
+
+1. Copy the URL of your `HttpExample` function from this output to a browser:
+
+ # [In-process](#tab/in-process)
+
+ To the function URL, append the query string `?name=<YOUR_NAME>`, making the full URL like `http://localhost:7071/api/HttpExample?name=Functions`. The browser should display a response message that echoes back your query string value. The terminal in which you started your project also shows log output as you make requests.
+
+ # [Isolated process](#tab/isolated-process)
+ Browse to the function URL and you should receive a _Welcome to Azure Functions_ message.
+
+
+
+1. When you're done, use **Ctrl**+**C** and choose `y` to stop the functions host.
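+
+While the host is running, you can also call the endpoint from a second terminal instead of a browser; for example (the `name` query string applies to the in-process template and is ignored by the isolated-process template):
+
+```console
+curl "http://localhost:7071/api/HttpExample?name=Functions"
+```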
[!INCLUDE [functions-create-azure-resources-cli](../../includes/functions-create-azure-resources-cli.md)] 4. Create the function app in Azure:
- # [Azure CLI](#tab/azure-cli)
-
+ # [Azure CLI](#tab/azure-cli/in-process)
+ ```azurecli
- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime dotnet --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime dotnet --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
```
+ The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure.
+
+ # [Azure CLI](#tab/azure-cli/isolated-process)
+
+ ```azurecli
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
+ ```
- The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure.
-
- # [Azure PowerShell](#tab/azure-powershell)
+ The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure.
+
+ # [Azure PowerShell](#tab/azure-powershell/in-process)
```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime dotnet -FunctionsVersion 3 -Location 'West Europe'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime dotnet -FunctionsVersion 3 -Location '<REGION>'
```
-
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure.
-
+
+ # [Azure PowerShell](#tab/azure-powershell/isolated-process)
+
+ ```azurepowershell
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime dotnet-isolated -FunctionsVersion 3 -Location '<REGION>'
+ ```
+
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure.
+ In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.acti
[!INCLUDE [functions-publish-project-cli](../../includes/functions-publish-project-cli.md)]
+## Invoke the function on Azure
+
+Because your function uses an HTTP trigger and supports GET requests, you invoke it by making an HTTP request to its URL. It's easiest to do this in a browser.
+
+# [In-process](#tab/in-process)
+
+Copy the complete **Invoke URL** shown in the output of the publish command into a browser address bar, appending the query parameter `?name=Functions`. When you navigate to this URL, the browser should display similar output as when you ran the function locally.
+
+# [Isolated process](#tab/isolated-process)
+
+Copy the complete **Invoke URL** shown in the output of the publish command into a browser address bar. When you navigate to this URL, the browser should display similar output as when you ran the function locally.
++ [!INCLUDE [functions-streaming-logs-cli-qs](../../includes/functions-streaming-logs-cli-qs.md)]
The return object is an [ActionResult](/dotnet/api/microsoft.aspnetcore.mvc.acti
## Next steps
+# [In-process](#tab/in-process)
+
+> [!div class="nextstepaction"]
+> [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-csharp&tabs=in-process)
+
+# [Isolated process](#tab/isolated-process)
+ > [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue]
+> [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-csharp&tabs=isolated-process)
-[Connect to an Azure Storage queue]: functions-add-output-binding-storage-queue-cli.md?pivots=programming-language-csharp
+
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-node.md
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure CLI](#tab/azure-cli) ```azurecli
- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime node --runtime-version 12 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 12 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
``` The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure. If you're using Node.js 10, also change `--runtime-version` to `10`.
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 12 -FunctionsVersion 3 -Location 'West Europe'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 12 -FunctionsVersion 3 -Location <REGION>
``` The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Node.js 10, change `-RuntimeVersion` to `10`.
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-powershell.md
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure CLI](#tab/azure-cli) ```azurecli
- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime powershell --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime powershell --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
``` The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure.
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime PowerShell -FunctionsVersion 3 -Location 'West Europe'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime PowerShell -FunctionsVersion 3 -Location '<REGION>'
``` The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure.
azure-functions Create First Function Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-python.md
Use the following commands to create these items. Both Azure CLI and PowerShell
-1. Create a resource group named `AzureFunctionsQuickstart-rg` in the `westeurope` region.
+1. Create a resource group named `AzureFunctionsQuickstart-rg` in your chosen region:
# [Azure CLI](#tab/azure-cli) ```azurecli
- az group create --name AzureFunctionsQuickstart-rg --location westeurope
+ az group create --name AzureFunctionsQuickstart-rg --location <REGION>
```
- The [az group create](/cli/azure/group#az_group_create) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the `az account list-locations` command.
+ The [az group create](/cli/azure/group#az_group_create) command creates a resource group. In the above command, replace `<REGION>` with a region near you, using an available region code returned from the [az account list-locations](/cli/azure/account#az_account_list_locations) command.
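+
+ For example, you can list the available region codes with:
+
+ ```azurecli
+ az account list-locations --query "[].name" --output tsv
+ ```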
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzResourceGroup -Name AzureFunctionsQuickstart-rg -Location westeurope
+ New-AzResourceGroup -Name AzureFunctionsQuickstart-rg -Location '<REGION>'
``` The [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) cmdlet.
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzStorageAccount -ResourceGroupName AzureFunctionsQuickstart-rg -Name <STORAGE_NAME> -SkuName Standard_LRS -Location westeurope
+ New-AzStorageAccount -ResourceGroupName AzureFunctionsQuickstart-rg -Name <STORAGE_NAME> -SkuName Standard_LRS -Location <REGION>
``` The [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet creates the storage account.
Use the following commands to create these items. Both Azure CLI and PowerShell
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -FunctionsVersion 3 -RuntimeVersion 3.8 -Runtime python -Location 'West Europe'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -FunctionsVersion 3 -RuntimeVersion 3.8 -Runtime python -Location '<REGION>'
``` The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Python 3.7 or 3.6, change `-RuntimeVersion` to `3.7` or `3.6`, respectively.
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-cli-typescript.md
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure CLI](#tab/azure-cli) ```azurecli
- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime node --runtime-version 12 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
+ az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 12 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
``` The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure. If you're using Node.js 10, also change `--runtime-version` to `10`.
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 12 -FunctionsVersion 3 -Location 'West Europe'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 12 -FunctionsVersion 3 -Location '<REGION>'
``` The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. If you're using Node.js 10, change `-RuntimeVersion` to `10`.
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-csharp.md
Title: Create a C# function using Visual Studio Code - Azure Functions description: Learn how to create a C# function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 11/03/2020 Last updated : 08/15/2021 adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
adobe-target-content: ./create-first-function-vs-code-csharp-ieux
[!INCLUDE [functions-language-selector-quickstart-vs-code](../../includes/functions-language-selector-quickstart-vs-code.md)]
-In this article, you use Visual Studio Code to create a C# class library-based function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+In this article, you use Visual Studio Code to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+
+This article supports creating both types of compiled C# functions:
+
++ [In-process](create-first-function-vs-code-csharp.md?tabs=in-process) - runs in the same process as the Functions host process. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
++ [Isolated process](create-first-function-vs-code-csharp.md?tabs=isolated-process) - runs in a separate .NET worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).

Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a [CLI-based version](create-first-function-cli-csharp.md) of this
Before you get started, make sure you have the following requirements in place:
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+# [In-process](#tab/in-process)
+
++ [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x.
+
++ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
+
++ [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x.
++ [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
+
+# [Isolated process](#tab/isolated-process)
+
++ [.NET 5.0 SDK](https://dotnet.microsoft.com/download)
+
++ [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 3.x.

+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-+ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
++ [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
+
++ [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
+
+
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code.
++ You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).

## <a name="create-an-azure-functions-project"></a>Create your local project
In this section, you use Visual Studio Code to create a local Azure Functions pr
1. Provide the following information at the prompts:
- + **Select a language for your function project**: Choose `C#`.
-
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
-
- + **Provide a namespace**: Type `My.Functions`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
-
+ # [In-process](#tab/in-process)
+
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `C#`.|
+ | **Select a .NET runtime** | Choose `.NET Core 3.1 LTS`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Provide a function name**|Type `HttpExample`.|
+ |**Provide a namespace** | Type `My.Functions`. |
+ |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+
+ # [Isolated process](#tab/isolated-process)
+
+ |Prompt|Selection|
+ |--|--|
+ |**Select a language for your function project**|Choose `C#`.|
+ | **Select a .NET runtime** | Choose `.NET 5.0 Isolated`.|
+ |**Select a template for your project's first function**|Choose `HTTP trigger`.|
+ |**Provide a function name**|Type `HttpExample`.|
+ |**Provide a namespace** | Type `My.Functions`. |
+ |**Authorization level**|Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+ |**Select how you would like to open your project**|Choose `Add to workspace`.|
+
+
+
1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
After you've verified that the function runs correctly on your local computer, it's time to use Visual Studio Code to publish the project directly to Azure.
## Next steps
-You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp).
+
+# [In-process](#tab/in-process)
> [!div class="nextstepaction"]
-> [Connect to a database](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp)
+> [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
+> [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=in-process)
+
+# [Isolated process](#tab/isolated-process)
+ > [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp)
+> [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process)
+> [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process)
+
[Azure Functions Core Tools]: functions-run-local.md
[Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
azure-functions Create First Function Vs Code Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-java.md
adobe-target-content: ./create-first-function-vs-code-java-uiex
# Quickstart: Create a Java function in Azure using Visual Studio Code
-> [!div class="op_single_selector" title1="Select your function language: "]
-> - [Java](create-first-function-vs-code-java.md)
-> - [Python](create-first-function-vs-code-python.md)
-> - [C#](create-first-function-vs-code-csharp.md)
-> - [JavaScript](create-first-function-vs-code-node.md)
-> - [PowerShell](create-first-function-vs-code-powershell.md)
-> - [TypeScript](create-first-function-vs-code-typescript.md)
-> - [Other (Go/Rust)](create-first-function-vs-code-other.md)
In this article, you use Visual Studio Code to create a Java function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/create-first-function-vs-code-node.md
Use the table below to resolve the most common issues encountered when using this quickstart.
## Next steps
-You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript).
+You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=javascript) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=javascript). If you want to learn more about security, see [Securing Azure Functions](security-concepts.md).
> [!div class="nextstepaction"]
-> [Connect to a database](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-javascript)
-> [!div class="nextstepaction"]
-> [Connect to an Azure Storage queue](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)
-> [Securing your Function](security-concepts.md)
+> [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-javascript)
+> [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-javascript)
[Azure Functions Core Tools]: functions-run-local.md
[Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions
azure-functions Dotnet Isolated Process Developer Howtos https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-developer-howtos.md
- Title: Develop and publish .NET 5 functions using Azure Functions
-description: Learn how to create and debug C# functions using .NET 5.0, then deploy the local project to serverless hosting in Azure Functions.
Previously updated : 05/03/2021-
-recommendations: false
-#Customer intent: As a developer, I need to know how to create functions that run in an isolated process so that I can run my function code on current (not LTS) releases of .NET.
-zone_pivot_groups: development-environment-functions
--
-# Develop and publish .NET 5 functions using Azure Functions
-
-This article shows you how to work with C# functions using .NET 5.0, which run out-of-process from the Azure Functions runtime. You'll learn how to create, debug locally, and publish these .NET isolated process functions to Azure. In Azure, these functions run in an isolated process that supports .NET 5.0. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
-
-If you don't need to support .NET 5.0 or run your functions out-of-process, you might want to instead [create a C# class library function](functions-create-your-first-function-visual-studio.md).
-
-## Prerequisites
-
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-
-+ [.NET 5.0 SDK](https://dotnet.microsoft.com/download)
-
-+ [Azure Functions Core Tools](functions-run-local.md#v2) version 3.0.3381, or a later version.
-
-+ [Azure CLI](/cli/azure/install-azure-cli) version 2.20, or a later version.
-+ [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms).
-
-+ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
-
-+ The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code, version 1.3.0 or newer.
-+ [Visual Studio 2019](https://azure.microsoft.com/downloads/) version 16.10 or later. Your install must include either the **Azure development** or the **ASP.NET and web development** workload.
-
-## Create a local function project
-
-In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function.
-
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, select the **Create new project...** icon.
-
- ![Choose Create a new project](./media/functions-create-first-function-vs-code/create-new-project.png)
-
-1. Choose a directory location for your project workspace and choose **Select**.
-
- > [!NOTE]
- > These steps were designed to be completed outside of a workspace. In this case, do not select a project folder that is part of a workspace.
-
-1. Provide the following information at the prompts:
-
- + **Select a language for your function project**: Choose `C#`.
-
- + **Select a .NET runtime**: Choose `.NET 5 isolated`.
-
- + **Select a template for your project's first function**: Choose `HTTP trigger`.
-
- + **Provide a function name**: Type `HttpExample`.
-
- + **Provide a namespace**: Type `My.Functions`.
-
- + **Authorization level**: Choose `Anonymous`, which enables anyone to call your function endpoint. To learn about authorization level, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
- + **Select how you would like to open your project**: Choose `Add to workspace`.
-
-1. Using this information, Visual Studio Code generates an Azure Functions project with an HTTP trigger. You can view the local project files in the Explorer. To learn more about files that are created, see [Generated project files](functions-develop-vs-code.md#generated-project-files).
-
-1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj*:
-
- ```console
- func init LocalFunctionProj --worker-runtime dotnetisolated
- ```
-
- Specifying `dotnetisolated` creates a project that runs on .NET 5.0.
--
-1. Navigate into the project folder:
-
- ```console
- cd LocalFunctionProj
- ```
-
- This folder contains various files for the project, including the [local.settings.json](functions-develop-local.md#local-settings-file) and [host.json](functions-host-json.md) configurations files. Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file.
-
-1. Add a function to your project by using the following command, where the `--name` argument is the unique name of your function (HttpExample) and the `--template` argument specifies the function's trigger (HTTP).
-
- ```console
- func new --name HttpExample --template "HTTP trigger" --authlevel "anonymous"
- ```
-
- `func new` creates an HttpExample.cs code file.
--
-1. From the Visual Studio menu, select **File** > **New** > **Project**.
-
-1. In **Create a new project**, enter *functions* in the search box, choose the **Azure Functions** template, and then select **Next**.
-
-1. In **Configure your new project**, enter a **Project name** for your project, and then select **Create**. The function app name must be valid as a C# namespace, so don't use underscores, hyphens, or any other nonalphanumeric characters.
-
-1. For the **Create a new Azure Functions application** settings, use the values in the following table:
-
- | Setting | Value | Description |
- | | - |-- |
- | **.NET version** | **.NET 5 (Isolated)** | This value creates a function project that runs on .NET 5.0 in an isolated process. |
- | **Function template** | **HTTP trigger** | This value creates a function triggered by an HTTP request. |
- | **Storage account (AzureWebJobsStorage)** | **Storage emulator** | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. |
- | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](functions-bindings-http-webhook.md). |
-
-
- ![Azure Functions project settings](./media/dotnet-isolated-process-developer-howtos/functions-project-settings.png)
-
- Make sure you set the **Authorization level** to **Anonymous**. If you choose the default level of **Function**, you're required to present the [function key](functions-bindings-http-webhook-trigger.md#authorization-keys) in requests to access your function endpoint.
-
-1. Select **Create** to create the function project and HTTP trigger function.
-
-Visual Studio creates a project and class that contains boilerplate code for the HTTP trigger function type. The boilerplate code sends a "Welcome to Azure Functions!" HTTP response. The `HttpTrigger` attribute specifies that the function is triggered by an HTTP request.
-
-## Rename the function
-
-The `FunctionName` method attribute sets the name of the function, which by default is generated as `Function1`. Since the tooling doesn't let you override the default function name when you create your project, take a minute to create a better name for the function class, file, and metadata.
-
-1. In **File Explorer**, right-click the Function1.cs file and rename it to `HttpExample.cs`.
-
-1. In the code, rename the Function1 class to `HttpExample`.
-
-1. In the `HttpTrigger` method named `Run`, rename the `FunctionName` method attribute to `HttpExample` and the value passed to the `GetLogger` method.
-
-Your function definition should now look like the following code:
-
-
-Now that you've renamed the function, you can test it on your local computer.
-
-## Run the function locally
-
-Visual Studio integrates with Azure Functions Core Tools so that you can test your functions locally using the full Azure Functions runtime.
-
-1. To run your function, press <kbd>F5</kbd> in Visual Studio. You might need to enable a firewall exception so that the tools can handle HTTP requests. Authorization levels are never enforced when you run a function locally.
-
-1. Copy the URL of your function from the Azure Functions runtime output and run the request. A welcome to Functions message is displayed when the function runs successfully and logs are written to the runtime output.
-
-1. To stop debugging, press <kbd>Shift</kbd>+<kbd>F5</kbd> in Visual Studio.
-
-After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure.
------
-## Create supporting Azure resources for your function
-
-Before you can deploy your function code to Azure, you need to create three resources:
-
-- A [resource group](../azure-resource-manager/management/overview.md), which is a logical container for related resources.
-- A [Storage account](../storage/common/storage-account-create.md), which is used to maintain state and other information about your functions.
-- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
-
-Use the following commands to create these items.
-
-1. If you haven't done so already, sign in to Azure:
-
- ```azurecli
- az login
- ```
-
- The [az login](/cli/azure/reference-index#az_login) command signs you into your Azure account.
-
-1. Create a resource group named `AzureFunctionsQuickstart-rg` in the `westeurope` region:
-
- ```azurecli
- az group create --name AzureFunctionsQuickstart-rg --location westeurope
- ```
-
- The [az group create](/cli/azure/group#az_group_create) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the `az account list-locations` command.
-
-1. Create a general-purpose storage account in your resource group and region:
-
- ```azurecli
- az storage account create --name <STORAGE_NAME> --location westeurope --resource-group AzureFunctionsQuickstart-rg --sku Standard_LRS
- ```
-
- The [az storage account create](/cli/azure/storage/account#az_storage_account_create) command creates the storage account.
-
- In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must contain three to 24 characters numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements).
-
-1. Create the function app in Azure:
-
- ```azurecli
- az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location westeurope --runtime dotnet-isolated --runtime-version 5.0 --functions-version 3 --name <APP_NAME> --storage-account <STORAGE_NAME>
- ```
-
- The [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command creates the function app in Azure.
-
- In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
-
- This command creates a function app running .NET 5.0 under the [Azure Functions Consumption Plan](consumption-plan.md). This plan should be free for the amount of usage you incur in this article. The command also provisions an associated Azure Application Insights instance in the same resource group. Use this instance to monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
----
-## Publish the project to Azure
-
-Before you can publish your project, you must have a function app in your Azure subscription. Visual Studio publishing creates a function app for you the first time you publish your project.
--
-## Verify your function in Azure
-
-1. In Cloud Explorer, your new function app should be selected. If not, expand your subscription > **App Services**, and select your new function app.
-
-1. Right-click the function app and choose **Open in Browser**. This opens the root of your function app in your default web browser and displays the page that indicates your function app is running.
-
- :::image type="content" source="media/functions-create-your-first-function-visual-studio/function-app-running-azure.png" alt-text="Function app running":::
-
-1. In the address bar in the browser, append the path `/api/HttpExample` to the base URL and run the request.
-
-1. Go to this URL and you see the same response in the browser you had when running locally.
----
-## Publish the project to Azure
-
-In this section, you create a function app and related resources in your Azure subscription and then deploy your code.
-
-> [!IMPORTANT]
-> Publishing to an existing function app overwrites the content of that app in Azure.
--
-1. Choose the Azure icon in the Activity bar, then in the **Azure: Functions** area, choose the **Deploy to function app...** button.
-
- ![Publish your project to Azure](../../includes/media/functions-publish-project-vscode/function-app-publish-project.png)
-
-1. Provide the following information at the prompts:
-
- - **Select folder**: Choose a folder from your workspace or browse to one that contains your function app. You won't see this prompt when you already have a valid function app opened.
-
- - **Select subscription**: Choose the subscription to use. You won't see this prompt when you only have one subscription.
-
- - **Select Function App in Azure**: Choose `- Create new Function App`. (Don't choose the `Advanced` option, which isn't covered in this article.)
-
- - **Enter a globally unique name for the function app**: Type a name that is valid in a URL path. The name you type is validated to make sure that it's unique in Azure Functions.
-
- - **Select a runtime stack**: Choose `.NET 5 (non-LTS)`.
-
- - **Select a location for new resources**: For better performance, choose a [region](https://azure.microsoft.com/regions/) near you.
-
- In the notification area, you see the status of individual resources as they're created in Azure.
-
- :::image type="content" source="../../includes/media/functions-publish-project-vscode/resource-notification.png" alt-text="Notification of Azure resource creation":::
-
-1. When completed, the following Azure resources are created in your subscription, using names based on your function app name:
-
- [!INCLUDE [functions-vs-code-created-resources](../../includes/functions-vs-code-created-resources.md)]
-
- A notification is displayed after your function app is created and the deployment package is applied.
-
- [!INCLUDE [functions-vs-code-create-tip](../../includes/functions-vs-code-create-tip.md)]
-
-4. Select **View Output** in this notification to view the creation and deployment results, including the Azure resources that you created. If you miss the notification, select the bell icon in the lower right corner to see it again.
-
- ![Create complete notification](../../includes/media/functions-publish-project-vscode/function-create-notifications.png)
----
-## Clean up resources
-
-You created resources to complete this article. You may be billed for these resources, depending on your [account status](https://azure.microsoft.com/account/) and [service pricing](https://azure.microsoft.com/pricing/).
-
-Use the following command to delete the resource group and all its contained resources to avoid incurring further costs.
-
-```azurecli
-az group delete --name AzureFunctionsQuickstart-rg
-```
-
-Use the following steps to delete the function app and its related resources to avoid incurring any further costs.
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Learn more about .NET isolated functions](dotnet-isolated-process-guide.md)
-
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/dotnet-isolated-process-guide.md
This article is an introduction to using C# to develop .NET isolated process functions, which run out-of-process in Azure Functions.
| Getting started | Concepts | Samples |
|--|--|--|
-| <ul><li>[Using Visual Studio Code](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vscode)</li><li>[Using command line tools](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-cli)</li><li>[Using Visual Studio](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vs)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Monitoring](functions-monitoring.md)</li> | <ul><li>[Reference samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples)</li></ul> |
+| <ul><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md?tabs=isolated-process)</li><li>[Using command line tools](create-first-function-cli-csharp.md?tabs=isolated-process)</li><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md?tabs=isolated-process)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Monitoring](functions-monitoring.md)</li> | <ul><li>[Reference samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples)</li></ul> |
If you don't need to support .NET 5.0 or run your functions out-of-process, you might want to instead [develop C# class library functions](functions-dotnet-class-library.md).
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Before you get started, make sure to install the [Azure Databases extension](htt
> [!IMPORTANT]
> [Azure Cosmos DB serverless](../cosmos-db/serverless.md) is now generally available. This consumption-based mode makes Azure Cosmos DB a strong option for serverless workloads. To use Azure Cosmos DB in serverless mode, choose **Serverless** as the **Capacity mode** when creating your account.
-1. In Visual Studio Code, right-click on the Azure subscription where you created your Function App in the [previous article](./create-first-function-vs-code-csharp.md) and select **Create Server...**
+1. In Visual Studio Code, choose the Azure icon in the Activity bar.
+
+1. In the **Azure: Databases** area, right-click (Ctrl+click on macOS) on the Azure subscription where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md), and select **Create Server...**
    :::image type="content" source="./media/functions-add-output-binding-cosmos-db-vs-code/create-account.png" alt-text="Creating a new Azure Cosmos DB account from Visual Studio code" border="true":::

1. Provide the following information at the prompts:
- + **Select an Azure Database Server**: Choose `Core (SQL)` to create a document database that you can query by using a SQL syntax. [Learn more about the Azure Cosmos DB SQL API](../cosmos-db/introduction.md).
-
- + **Account name**: Enter a unique name to identify your Azure Cosmos DB account. The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.
-
- + **Select a capacity model**: Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode. Select **Provisioned throughput** to create an account in [provisioned throughput](../cosmos-db/set-throughput.md) mode. It is advised to choose **Serverless** if you're getting started with Azure Cosmos DB.
+ |Prompt| Selection|
+ |--|--|
+ |**Select an Azure Database Server**| Choose `Core (SQL)` to create a document database that you can query by using a SQL syntax. [Learn more about the Azure Cosmos DB SQL API](../cosmos-db/introduction.md). |
+ |**Account name**| Enter a unique name to identify your Azure Cosmos DB account. The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.|
+ |**Select a capacity model**| Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode.|
+ |**Select a resource group for new resources**| Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). |
+ |**Select a location for new resources**| Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to you or your users to get the fastest access to your data. |
- + **Select a resource group for new resources**: Choose the resource group where you created your Function App in the [previous article](./create-first-function-vs-code-csharp.md).
-
- + **Select a location for new resources**: Select a geographic location to host your Azure Cosmos DB account. Use the location that's closest to you or your users to get the fastest access to your data.
+ After your new account is provisioned, a message is displayed in the notification area.
## Create an Azure Cosmos DB database and container
-1. Once your new Azure Cosmos DB account has been created, right-click on its name, and select **Create Database...**.
-
- :::image type="content" source="./media/functions-add-output-binding-cosmos-db-vs-code/create-database.png" alt-text="Creating a new Azure Cosmos DB database from Visual Studio code" border="true":::
-
-1. When prompted, enter `my-database` as the **Database Name**.
-
-1. After the database is created, right-click on its name and select **Create Collection...**
-
- :::image type="content" source="./media/functions-add-output-binding-cosmos-db-vs-code/create-container.png" alt-text="Creating a new Azure Cosmos DB container from Visual Studio code" border="true":::
+1. Right-click your account and select **Create database...**.
1. Provide the following information at the prompts:
- + **Enter an id for your Collection**: `my-container`
+ |Prompt| Selection|
+ |--|--|
+ |**Database name** | Type `my-database`.|
+ |**Enter an ID for your collection**| Type `my-container`. |
+ |**Enter the partition key for the collection**|Type `/id` as the [partition key](../cosmos-db/partitioning-overview.md).|
- + **Enter the [partition key](../cosmos-db/partitioning-overview.md) for the collection**: `id`
+1. Select **OK** to create the container and database.
## Update your function app settings
-In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure. In this article, you update your Function App to write JSON documents in the Azure Cosmos DB container you've created above. To connect to your Azure Cosmos DB account, you must add its connection string to your app settings. You then download the new setting to your local.settings.json file so you can connect to your Azure Cosmos DB account when running locally.
+In the [previous quickstart article](./create-first-function-vs-code-csharp.md), you created a function app in Azure. In this article, you update your app to write JSON documents to the Azure Cosmos DB container you've just created. To connect to your Azure Cosmos DB account, you must add its connection string to your app settings. You then download the new setting to your local.settings.json file so you can connect to your Azure Cosmos DB account when running locally.
-1. In Visual Studio Code, right-click on your Azure Cosmos DB account, and select **Copy Connection String**.
+1. In Visual Studio Code, right-click (Ctrl+click on macOS) on your new Azure Cosmos DB account, and select **Copy Connection String**.
:::image type="content" source="./media/functions-add-output-binding-cosmos-db-vs-code/copy-connection-string.png" alt-text="Copying the Azure Cosmos DB connection string" border="true":::
In the [previous quickstart article](./create-first-function-vs-code-csharp.md),
1. Choose the function app you created in the previous article. Provide the following information at the prompts:
- + **Enter new app setting name**: Type `CosmosDbConnectionString`.
+ |Prompt| Selection|
+ |--|--|
+ |**Enter new app setting name**| Type `CosmosDbConnectionString`.|
+ |**Enter value for "CosmosDbConnectionString"**| Paste the connection string of your Azure Cosmos DB account you just copied.|
- + **Enter value for "CosmosDbConnectionString"**: Paste the connection string of your Azure Cosmos DB account, as copied earlier.
+ This creates an application setting named `CosmosDbConnectionString` in your function app in Azure. Now, you can download this setting to your local.settings.json file.
1. Press <kbd>F1</kbd> again to open the command palette, then search for and run the command `Azure Functions: Download Remote Settings...`.

1. Choose the function app you created in the previous article. Select **Yes to all** to overwrite the existing local settings.
+This downloads all of the settings from Azure to your local project, including the new connection string setting. Most of the downloaded settings aren't used when running locally.
+
## Register binding extensions

Because you're using an Azure Cosmos DB output binding, you must have the corresponding bindings extension installed before you run the project.

::: zone pivot="programming-language-csharp"
-With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Storage extension package to your project.
+With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Azure Cosmos DB extension package to your project.
+# [In-process](#tab/in-process)
```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB
+dotnet add package Microsoft.Azure.WebJobs.Extensions.CosmosDB
```
-
+# [Isolated process](#tab/isolated-process)
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.CosmosDB
+```
+
+Now, you can add the Azure Cosmos DB output binding to your project.
::: zone-end

::: zone pivot="programming-language-javascript"
In Functions, each type of binding requires a `direction`, `type`, and a unique `name` to be defined in the function.json file.
In a C# class library project, the bindings are defined as binding attributes on the function method. The *function.json* file required by Functions is then auto-generated based on these attributes.
+
+
+# [In-process](#tab/in-process)
Open the *HttpExample.cs* project file and add the following parameter to the `Run` method definition:
-```csharp
-[CosmosDB(
- databaseName: "my-database",
- collectionName: "my-container",
- ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut,
-```
-The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that will be written to your Azure Cosmos DB container when the function completes. Specific attributes specifies the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
+The `documentsOut` parameter is an `IAsyncCollector<T>` type, which represents a collection of JSON documents that are written to your Azure Cosmos DB container when the function completes. Specific attributes indicate the names of the container and its parent database. The connection string for your Azure Cosmos DB account is set by the `ConnectionStringSettingAttribute`.
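A minimal sketch of that parameter, using the `my-database` and `my-container` names created earlier (the published page pulls the exact snippet from an include file):

```csharp
[CosmosDB(
    databaseName: "my-database",
    collectionName: "my-container",
    ConnectionStringSetting = "CosmosDbConnectionString")] IAsyncCollector<dynamic> documentsOut,
```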
-The Run method definition should now look like the following:
+# [Isolated process](#tab/isolated-process)
-```csharp
-[FunctionName("HttpExample")]
-public static async Task<IActionResult> Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
- [CosmosDB(
- databaseName: "my-database",
- collectionName: "my-container",
- ConnectionStringSetting = "CosmosDbConnectionString")]IAsyncCollector<dynamic> documentsOut,
- ILogger log)
-```
+Open the *HttpExample.cs* project file and add the following classes:
++
+The `MyDocument` class defines an object that gets written to the database. The connection to your Azure Cosmos DB account is configured by the output binding attribute, described below.
+
+The `MultiResponse` class allows you to both write to the specified collection in Azure Cosmos DB and return an HTTP success message. Because you need to return a `MultiResponse` object, you also need to update the method signature.
+++
+Specific attributes specify the name of the container and the name of its parent database. The connection string for your Azure Cosmos DB account is set by the `CosmosDbConnectionString` application setting.
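The classes themselves are supplied by an include in the published article. A sketch of what they look like, assuming the attribute shape of the `Microsoft.Azure.Functions.Worker.Extensions.CosmosDB` package referenced above (usings for `Microsoft.Azure.Functions.Worker` and `Microsoft.Azure.Functions.Worker.Http` assumed):

```csharp
public class MultiResponse
{
    // Written to the Azure Cosmos DB container when the function returns.
    [CosmosDBOutput("my-database", "my-container",
        ConnectionStringSetting = "CosmosDbConnectionString", CreateIfNotExists = true)]
    public MyDocument Document { get; set; }

    // Sent back to the caller as the HTTP response.
    public HttpResponseData HttpResponse { get; set; }
}

public class MyDocument
{
    public string id { get; set; }
    public string message { get; set; }
}
```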
::: zone-end
A binding is added to the `bindings` array in your function.json, which should look like the following:
::: zone pivot="programming-language-csharp"
+# [In-process](#tab/in-process)
+
Add code that uses the `documentsOut` output binding object to create a JSON document. Add this code before the method returns.

```csharp
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
    [CosmosDB(
        databaseName: "my-database",
        collectionName: "my-container",
        ConnectionStringSetting = "CosmosDbConnectionString")] IAsyncCollector<dynamic> documentsOut,
    ILogger log)
{
    // ... existing quickstart code that reads `name` from the request ...

    if (!string.IsNullOrEmpty(name))
    {
        // Add a JSON document to the output container.
        await documentsOut.AddAsync(new
        {
            // Create a random ID for the new document.
            id = System.Guid.NewGuid().ToString(),
            name = name
        });
    }

    // ... existing quickstart code that returns the HTTP response ...
}
```
+# [Isolated process](#tab/isolated-process)
+
+Replace the existing Run method with the following code:
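The replacement itself is injected from an include in the published article. A sketch of that method, assuming the .NET isolated worker's `HttpRequestData`/`HttpResponseData` types and the `MultiResponse` class defined earlier (usings for `System`, `System.Net`, `Microsoft.Azure.Functions.Worker`, `Microsoft.Azure.Functions.Worker.Http`, and `Microsoft.Extensions.Logging` assumed):

```csharp
[Function("HttpExample")]
public static MultiResponse Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
    FunctionContext executionContext)
{
    var logger = executionContext.GetLogger("HttpExample");
    logger.LogInformation("C# HTTP trigger function processed a request.");

    var message = "Welcome to Azure Functions!";

    // Build the HTTP response that MultiResponse hands back to the caller.
    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
    response.WriteString(message);

    // Returning MultiResponse both writes the document to Azure Cosmos DB
    // and sends the HTTP response.
    return new MultiResponse()
    {
        Document = new MyDocument
        {
            id = Guid.NewGuid().ToString(),
            message = message
        },
        HttpResponse = response
    };
}
```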
+
::: zone-end

::: zone pivot="programming-language-javascript"
module.exports = async function (context, req) {
} ```
+This code now returns a `MultiResponse` object that contains both a document and an HTTP response.
::: zone-end
+
## Run the function locally

1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
module.exports = async function (context, req) {
1. In **Enter request body** you see the request message body value of `{ "name": "Azure" }`. Press Enter to send this request message to your function.

1. After a response is returned, press <kbd>Ctrl + C</kbd> to stop Core Tools.

### Verify that a JSON document has been created
azure-functions Functions Add Output Binding Storage Queue Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-storage-queue-cli.md
Before you begin, you must complete the article, [Quickstart: Create an Azure Fu
[!INCLUDE [functions-cli-get-storage-connection](../../includes/functions-cli-get-storage-connection.md)]
+## Register binding extensions
+
[!INCLUDE [functions-register-storage-binding-extension-csharp](../../includes/functions-register-storage-binding-extension-csharp.md)]

[!INCLUDE [functions-add-output-binding-cli](../../includes/functions-add-output-binding-cli.md)]
azure-functions Functions Add Output Binding Storage Queue Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md
Title: Connect Azure Functions to Azure Storage using Visual Studio Code
-description: Learn how to connect Azure Functions to an Azure Storage queue by adding an output binding to your Visual Studio Code project.
+description: Learn how to connect Azure Functions to Azure Queue Storage by adding an output binding to your Visual Studio Code project.
Last updated 02/07/2020
Extension bundles usage is enabled in the host.json file at the root of the proj
::: zone pivot="programming-language-csharp"
-With the exception of HTTP and timer triggers, bindings are implemented as extension packages. Run the following [dotnet add package](/dotnet/core/tools/dotnet-add-package) command in the Terminal window to add the Storage extension package to your project.
-
-```bash
-dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage
-```
::: zone-end
After the binding is defined, you can use the `name` of the binding to access it
::: zone-end
+
+
## Run the function locally

1. As in the previous article, press <kbd>F5</kbd> to start the function app project and Core Tools.
After the binding is defined, you can use the `name` of the binding to access it
Because you are using the storage connection string, your function connects to the Azure storage account when running locally. A new queue named **outqueue** is created in your storage account by the Functions runtime when the output binding is first used. You'll use Storage Explorer to verify that the queue was created along with the new message.
+
### Connect Storage Explorer to your account

Skip this section if you have already installed Azure Storage Explorer and connected it to your Azure account.
azure-functions Functions Add Output Binding Storage Queue Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-add-output-binding-storage-queue-vs.md
Because you're using a Queue storage output binding, you need the Storage bindings extension installed before you run the project.
1. In the console, run the following [Install-Package](/nuget/tools/ps-ref-install-package) command to install the Storage extensions:
- ```Command
- Install-Package Microsoft.Azure.WebJobs.Extensions.Storage -Version 3.0.6
- ````
+ # [In-process](#tab/in-process)
+ ```bash
+ Install-Package Microsoft.Azure.WebJobs.Extensions.Storage
+ ```
+ # [Isolated process](#tab/isolated-process)
+ ```bash
+ Install-Package Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues -IncludePrerelease
+ ```
+
Now, you can add the storage output binding to your project.
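For reference, in the in-process model the queue binding ends up as an attribute-decorated parameter on the `Run` method. A minimal sketch, assuming the quickstart's `outqueue` queue name and the default `AzureWebJobsStorage` connection:

```csharp
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
    // Strings added to this collector become messages on the "outqueue" queue.
    [Queue("outqueue"), StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
    ILogger log)
```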
azure-functions Functions Create Function Linux Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-function-linux-custom-image.md
You can follow this tutorial on any computer running Windows, macOS, or Linux.
## Create and test the local functions project

::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python"
-In a terminal or command prompt, run the following command for your chosen language to create a function app project in a folder named `LocalFunctionsProject`.
+In a terminal or command prompt, run the following command for your chosen language to create a function app project in the current folder.
::: zone-end

::: zone pivot="programming-language-csharp"
+
+# [In-process](#tab/in-process)
+```console
+func init --worker-runtime dotnet --docker
+```
+
+# [Isolated process](#tab/isolated-process)
```console
-func init LocalFunctionsProject --worker-runtime dotnet --docker
+func init --worker-runtime dotnet-isolated --docker
```
+
::: zone-end

::: zone pivot="programming-language-javascript"

```console
-func init LocalFunctionsProject --worker-runtime node --language javascript --docker
+func init --worker-runtime node --language javascript --docker
```
::: zone-end

::: zone pivot="programming-language-powershell"

```console
-func init LocalFunctionsProject --worker-runtime powershell --docker
+func init --worker-runtime powershell --docker
```
::: zone-end

::: zone pivot="programming-language-python"

```console
-func init LocalFunctionsProject --worker-runtime python --docker
+func init --worker-runtime python --docker
```
::: zone-end

::: zone pivot="programming-language-typescript"

```console
-func init LocalFunctionsProject --worker-runtime node --language typescript --docker
+func init --worker-runtime node --language typescript --docker
```
::: zone-end

::: zone pivot="programming-language-java"
Maven creates the project files in a new folder with a name of _artifactId_, which in this example is `fabrikam-functions`.
::: zone pivot="programming-language-other" ```console
-func init LocalFunctionsProject --worker-runtime custom --docker
+func init --worker-runtime custom --docker
```
::: zone-end

The `--docker` option generates a `Dockerfile` for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.

Navigate into the project folder:
+
```console
-cd LocalFunctionsProject
+cd fabrikam-functions
```
::: zone-end
-```console
-cd fabrikam-functions
+
+# [In-process](#tab/in-process)
+No changes are needed to the Dockerfile.
+# [Isolated process](#tab/isolated-process)
+Open the Dockerfile and add the following lines after the first `FROM` statement, if not already present:
+
+```docker
+# Build requires 3.1 SDK
+COPY --from=mcr.microsoft.com/dotnet/core/sdk:3.1 /usr/share/dotnet /usr/share/dotnet
```
+
::: zone-end

::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python"
-Add a function to your project by using the following command, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*.
+Add a function to your project by using the following command, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a C# code file in your project.
```console
-func new --name HttpExample --template "HTTP trigger"
+func new --name HttpExample --template "HTTP trigger" --authlevel anonymous
``` ::: zone-end Add a function to your project by using the following command, where the `--name` argument is the unique name of your function and the `--template` argument specifies the function's trigger. `func new` creates a subfolder matching the function name that contains a configuration file named *function.json*. ```console
-func new --name HttpExample --template "HTTP trigger"
+func new --name HttpExample --template "HTTP trigger" --authlevel anonymous
```
-
In a text editor, create a file in the project folder named *handler.R*. Add the following as its content.

```r
To test the build, run the image in a local container using the [docker run](htt
docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0 ```
-Once the image is running in a local container, open a browser to `http://localhost:8080`, which should display the placeholder image shown below. The image appears at this point because your function is running in the local container, as it would in Azure, which means that it's protected by an access key as defined in *function.json* with the `"authLevel": "function"` property. The container hasn't yet been published to a function app in Azure, however, so the key isn't yet available. If you want to test against the local container, stop docker, change the authorization property to `"authLevel": "anonymous"`, rebuild the image, and restart docker. Then reset `"authLevel": "function"` in *function.json*. For more information, see [authorization keys](functions-bindings-http-webhook-trigger.md#authorization-keys).
-
-![Placeholder image indicating that the container is running locally](./media/functions-create-function-linux-custom-image/run-image-local-success.png)
+# [In-process](#tab/in-process)
+After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which should display the same "hello" message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. To learn more, see [authorization keys].
+# [Isolated process](#tab/isolated-process)
+After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample`, which should display the same greeting message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. To learn more, see [authorization keys].
+ ::: zone-end
-Once the image is running in a local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which should display the same "hello" message as before. Because the Maven archetype generates an HTTP triggered function that uses anonymous authorization, you can still call the function even though it's running in the container.
+After the image starts in the local container, browse to `http://localhost:8080/api/HttpExample?name=Functions`, which should display the same "hello" message as before. Because the HTTP triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. To learn more, see [authorization keys].
::: zone-end

After you've verified the function app in the container, stop docker with **Ctrl**+**C**.
Docker Hub is a container registry that hosts images and provides image and container services.
## Create supporting Azure resources for your function
-To deploy your function code to Azure, you need to create three resources:
+Before you can deploy your function code to Azure, you need to create three resources:
-- A resource group, which is a logical container for related resources.
-- An Azure Storage account, which maintains state and other information about your projects.
+- A [resource group](../azure-resource-manager/management/overview.md), which is a logical container for related resources.
+- A [Storage account](../storage/common/storage-account-create.md), which is used to maintain state and other information about your functions.
- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
-You use Azure CLI commands to create these items. Each command provides JSON output upon completion.
+Use the following commands to create these items. Both Azure CLI and PowerShell are supported.
-1. Sign in to Azure with the [az login](/cli/azure/reference-index#az_login) command:
+1. If you haven't done so already, sign in to Azure:
+ # [Azure CLI](#tab/azure-cli)
    ```azurecli
    az login
    ```
-
-1. Create a resource group with the [az group create](/cli/azure/group#az_group_create) command. The following example creates a resource group named `AzureFunctionsContainers-rg` in the `westeurope` region. (You generally create your resource group and resources in a region near you, using an available region from the `az account list-locations` command.)
- ```azurecli
- az group create --name AzureFunctionsContainers-rg --location westeurope
+ The [az login](/cli/azure/reference-index#az_login) command signs you into your Azure account.
+
+ # [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell
+ Connect-AzAccount
```
-
- > [!NOTE]
- > You can't host Linux and Windows apps in the same resource group. If you have an existing resource group named `AzureFunctionsContainers-rg` with a Windows function app or web app, you must use a different resource group.
-
-1. Create a general-purpose storage account in your resource group and region by using the [az storage account create](/cli/azure/storage/account#az_storage_account_create) command. In the following example, replace `<storage_name>` with a globally unique name appropriate to you. Names must contain three to 24 characters numbers and lowercase letters only. `Standard_LRS` specifies a typical general-purpose account.
+ The [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet signs you into your Azure account.
+
+
+
+1. Create a resource group named `AzureFunctionsContainers-rg` in your chosen region:
+
+ # [Azure CLI](#tab/azure-cli)
+
```azurecli
- az storage account create --name <storage_name> --location westeurope --resource-group AzureFunctionsContainers-rg --sku Standard_LRS
+ az group create --name AzureFunctionsContainers-rg --location <REGION>
```
-
- The storage account incurs only a few USD cents for this tutorial.
-
-1. Use the command to create a Premium plan for Azure Functions named `myPremiumPlan` in the **Elastic Premium 1** pricing tier (`--sku EP1`), in the West Europe region (`-location westeurope`, or use a suitable region near you), and in a Linux container (`--is-linux`).
+
+ The [az group create](/cli/azure/group#az_group_create) command creates a resource group. In the above command, replace `<REGION>` with a region near you, using an available region code returned from the [az account list-locations](/cli/azure/account#az_account_list_locations) command.
- ```azurecli
- az functionapp plan create --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --location westeurope --number-of-workers 1 --sku EP1 --is-linux
- ```
+ # [Azure PowerShell](#tab/azure-powershell)
- We use the Premium plan here, which can scale as needed. To learn more about hosting, see [Azure Functions hosting plans comparison](functions-scale.md). To calculate costs, see the [Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
+ ```azurepowershell
+ New-AzResourceGroup -Name AzureFunctionsContainers-rg -Location <REGION>
+ ```
- The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ The [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) command creates a resource group. You generally create your resource group and resources in a region near you, using an available region returned from the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) cmdlet.
-## Create and configure a function app on Azure with the image
+
-A function app on Azure manages the execution of your functions in your hosting plan. In this section, you use the Azure resources from the previous section to create a function app from an image on Docker Hub and configure it with a connection string to Azure Storage.
+1. Create a general-purpose storage account in your resource group and region:
-1. Create the Functions app using the [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command. In the following example, replace `<storage_name>` with the name you used in the previous section for the storage account. Also replace `<app_name>` with a globally unique name appropriate to you, and `<docker_id>` with your Docker ID.
+ # [Azure CLI](#tab/azure-cli)
- ::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python,programming-language-java"
- ```azurecli
- az functionapp create --name <app_name> --storage-account <storage_name> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --runtime <functions runtime stack> --deployment-container-image-name <docker_id>/azurefunctionsimage:v1.0.0
- ```
- ::: zone-end
- ::: zone pivot="programming-language-other"
```azurecli
- az functionapp create --name <app_name> --storage-account <storage_name> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --runtime custom --deployment-container-image-name <docker_id>/azurefunctionsimage:v1.0.0
+ az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsContainers-rg --sku Standard_LRS
```
- ::: zone-end
-
- The *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az_functionapp_config_container_show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az_functionapp_config_container_set) command to deploy from a different image.
-
- > [!TIP]
- > You can use the [`DisableColor` setting](functions-host-json.md#console) in the host.json file to prevent ANSI control characters from being written to the container logs.
-1. Display the connection string for the storage account you created by using the [az storage account show-connection-string](/cli/azure/storage/account) command. Replace `<storage-name>` with the name of the storage account you created above:
+ The [az storage account create](/cli/azure/storage/account#az_storage_account_create) command creates the storage account.
- ```azurecli
- az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <storage_name> --query connectionString --output tsv
- ```
-
-1. Add this setting to the function app by using the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az_functionapp_config_ppsettings_set) command. In the following command, replace `<app_name>` with the name of your function app, and replace `<connection_string>` with the connection string from the previous step (a long encoded string that begins with "DefaultEndpointProtocol="):
-
- ```azurecli
- az functionapp config appsettings set --name <app_name> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<connection_string>
- ```
+ # [Azure PowerShell](#tab/azure-powershell)
- > [!TIP]
- > In Bash, you can use a shell variable to capture the connection string instead of using the clipboard. First, use the following command to create a variable with the connection string:
- >
- > ```bash
- > storageConnectionString=$(az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <storage_name> --query connectionString --output tsv)
- > ```
- >
- > Then refer to the variable in the second command:
- >
- > ```azurecli
- > az functionapp config appsettings set --name <app_name> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=$storageConnectionString
- > ```
+ ```azurepowershell
+ New-AzStorageAccount -ResourceGroupName AzureFunctionsContainers-rg -Name <STORAGE_NAME> -SkuName Standard_LRS -Location <REGION>
+ ```
-1. The function can now use this connection string to access the storage account.
+ The [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) cmdlet creates the storage account.
-> [!NOTE]
-> If you publish your custom image to a private container account, you should use environment variables in the Dockerfile for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You should also set the variables `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`. To use the values, then, you must rebuild the image, push the image to the registry, and then restart the function app on Azure.
+
-## Verify your functions on Azure
+ In the previous example, replace `<STORAGE_NAME>` with a name that is appropriate to you and unique in Azure Storage. Names must be from 3 to 24 characters long and can contain numbers and lowercase letters only. `Standard_LRS` specifies a general-purpose account, which is [supported by Functions](storage-considerations.md#storage-account-requirements).
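    If you're not sure whether a storage account name is still available, you can check it first; a minimal Azure CLI sketch (the `<STORAGE_NAME>` placeholder is the name you want to test):

    ```azurecli
    # Returns nameAvailable: true when the name can still be claimed.
    az storage account check-name --name <STORAGE_NAME>
    ```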
+
+1. Use the following command to create a Premium plan for Azure Functions named `myPremiumPlan` in the **Elastic Premium 1** pricing tier (`--sku EP1`), in your `<REGION>`, and in a Linux container (`--is-linux`).
-With the image deployed to the function app on Azure, you can now invoke the function through HTTP requests. Because the *function.json* definition includes the property `"authLevel": "function"`, you must first obtain the access key (also called the "function key") and include it as a URL parameter in any requests to the endpoint.
+ # [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ az functionapp plan create --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --location <REGION> --number-of-workers 1 --sku EP1 --is-linux
+ ```
+ # [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell
+ New-AzFunctionAppPlan -ResourceGroupName AzureFunctionsContainers-rg -Name myPremiumPlan -Location <REGION> -Sku EP1 -WorkerType Linux
+ ```
+
+ We use the Premium plan here, which can scale as needed. To learn more about hosting, see [Azure Functions hosting plans comparison](functions-scale.md). To calculate costs, see the [Functions pricing page](https://azure.microsoft.com/pricing/details/functions/).
-1. Retrieve the function URL with the access (function) key by using the Azure portal, or by using the Azure CLI with the `az rest` command.)
+ The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
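    If you want to confirm the plan was created in the expected tier before moving on, one option (a sketch reusing the same resource group and plan name as above) is to query it with the Azure CLI:

    ```azurecli
    # Shows the SKU of the plan; expect EP1 for Elastic Premium 1.
    az functionapp plan show --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --query sku
    ```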
- # [Portal](#tab/portal)
+## Create and configure a function app on Azure with the image
- 1. Sign in to the Azure portal, then search for and select **Function App**.
+A function app on Azure manages the execution of your functions in your hosting plan. In this section, you use the Azure resources from the previous section to create a function app from an image on Docker Hub and configure it with a connection string to Azure Storage.
- 1. Select the function you want to verify.
+1. Create a function app using the following command:
- 1. In the left navigation panel, select **Functions**, and then select the function you want to verify.
+ # [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
+ ```
- ![Choose your function in the Azure portal](./media/functions-create-function-linux-custom-image/functions-portal-select-function.png)
+ In the [az functionapp create](/cli/azure/functionapp#az_functionapp_create) command, the *deployment-container-image-name* parameter specifies the image to use for the function app. You can use the [az functionapp config container show](/cli/azure/functionapp/config/container#az_functionapp_config_container_show) command to view information about the image used for deployment. You can also use the [az functionapp config container set](/cli/azure/functionapp/config/container#az_functionapp_config_container_set) command to deploy from a different image.
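    For example, to check which image the app is currently configured to run (a sketch reusing the `<APP_NAME>` placeholder from this step):

    ```azurecli
    # Prints the container settings, including the configured image name.
    az functionapp config container show --name <APP_NAME> --resource-group AzureFunctionsContainers-rg
    ```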
+ # [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg -PlanName myPremiumPlan -StorageAccount <STORAGE_NAME> -DockerImageName <DOCKER_ID>/azurefunctionsimage:v1.0.0
+ ```
+
- 1. Select **Get Function Url**.
+ In this example, replace `<STORAGE_NAME>` with the name you used in the previous section for the storage account. Also replace `<APP_NAME>` with a globally unique name appropriate to you, and `<DOCKER_ID>` with your Docker Hub ID.
+
+ > [!TIP]
+ > You can use the [`DisableColor` setting](functions-host-json.md#console) in the host.json file to prevent ANSI control characters from being written to the container logs.
- ![Get the function URL from the Azure portal](./media/functions-create-function-linux-custom-image/functions-portal-get-function-url.png)
+1. Use the following command to get the connection string for the storage account you created:
-
- 1. In the pop-up window, select **default (function key)** and then copy the URL to the clipboard. The key is the string of characters following `?code=`.
+ # [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv
+ ```
- ![Choose the default function access key](./media/functions-create-function-linux-custom-image/functions-portal-copy-url.png)
+ The [az storage account show-connection-string](/cli/azure/storage/account) command returns the connection string for the storage account.
+ # [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell
+ $storage_name = "<STORAGE_NAME>"
+ $key = (Get-AzStorageAccountKey -ResourceGroupName AzureFunctionsContainers-rg -Name $storage_name)[0].Value
+ $string = "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=" + $storage_name + ";AccountKey=" + $key
+ Write-Output($string)
+ ```
+ The key returned by the [Get-AzStorageAccountKey](/powershell/module/az.storage/get-azstorageaccountkey) cmdlet is used to construct the connection string for the storage account.
- > [!NOTE]
- > Because your function app is deployed as a container, you can't make changes to your function code in the portal. You must instead update the project in the local image, push the image to the registry again, and then redeploy to Azure. You can set up continuous deployment in a later section.
-
- # [Azure CLI](#tab/azurecli)
+
- 1. Construct a URL string in the following format, replacing `<subscription_id>`, `<resource_group>`, and `<app_name>` with your Azure subscription ID, the resource group of your function app, and the name of your function app, respectively:
+ Replace `<STORAGE_NAME>` with the name of the storage account you created previously.
- ```
- "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.Web/sites/<app_name>/host/default/listKeys?api-version=2018-11-01"
- ```
+1. Add this setting to the function app by using the following command:
+
+ # [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<CONNECTION_STRING>
+ ```
+ The [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az_functionapp_config_appsettings_set) command creates the setting.
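    If you later want to confirm the setting took effect, a quick sketch that filters the app settings for `AzureWebJobsStorage`:

    ```azurecli
    # Lists only the AzureWebJobsStorage setting; its value is the connection string.
    az functionapp config appsettings list --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --query "[?name=='AzureWebJobsStorage']"
    ```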
- For example, the URL might look the following address:
+ # [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell
+ Update-AzFunctionAppSetting -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg -AppSetting @{"AzureWebJobsStorage"="<CONNECTION_STRING>"}
+ ```
+ The [Update-AzFunctionAppSetting](/powershell/module/az.functions/update-azfunctionappsetting) cmdlet creates the setting.
- ```
- "/subscriptions/1234aaf4-1234-abcd-a79a-245ed34eabcd/resourceGroups/AzureFunctionsContainers-rg/providers/Microsoft.Web/sites/msdocsfunctionscontainer/host/default/listKeys?api-version=2018-11-01"
- ```
+
- > [!TIP]
- > For convenience, you can instead assign the URL to an environment variable and use it in the `az rest` command.
-
- 1. Run the following `az rest` command (available in the Azure CLI version 2.0.77 and later), replacing `<uri>` with the URI string from the last step, including the quotes:
+ In this command, replace `<APP_NAME>` with the name of your function app and `<CONNECTION_STRING>` with the connection string from the previous step. The connection string is a long encoded string that begins with `DefaultEndpointsProtocol=`.
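    In Bash, you can skip the copy-and-paste step by capturing the connection string in a shell variable and passing it to the next command, as in this sketch:

    ```bash
    # Capture the connection string, then use it to set the app setting in one pass.
    storageConnectionString=$(az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv)
    az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=$storageConnectionString
    ```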
+
- ```azurecli
- az rest --method post --uri <uri> --query functionKeys.default --output tsv
- ```
+1. The function can now use this connection string to access the storage account.
- 1. The output of the command is the function key. The full function URL is then `https://<app_name>.azurewebsites.net/api/<function_name>?code=<key>`, replacing `<app_name>`, `<function_name>`, and `<key>` with your specific values.
-
- > [!NOTE]
- > The key retrieved here is the *host* key that works for all functions in the functions app; the method shown for the portal retrieves the key for the one function only.
+> [!NOTE]
+> If you publish your custom image to a private container registry, you should use environment variables in the Dockerfile for the connection string instead. For more information, see the [ENV instruction](https://docs.docker.com/engine/reference/builder/#env). You should also set the variables `DOCKER_REGISTRY_SERVER_USERNAME` and `DOCKER_REGISTRY_SERVER_PASSWORD`. After you set these values, you must rebuild the image, push it to the registry, and then restart the function app on Azure.
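For instance, those registry credentials can be supplied as app settings; a hedged sketch (the `<USERNAME>` and `<PASSWORD>` placeholders are illustrative, not values from this article):

```azurecli
# Set the private registry credentials as app settings on the function app.
az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings DOCKER_REGISTRY_SERVER_USERNAME=<USERNAME> DOCKER_REGISTRY_SERVER_PASSWORD=<PASSWORD>
```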
-
+## Verify your functions on Azure
-1. Paste the function URL into your browser's address bar, adding the parameter `&name=Azure` to the end of this URL. Text like "Hello, Azure" should appear in the browser.
+With the image deployed to your function app in Azure, you can now invoke the function as before through HTTP requests.
+In your browser, navigate to a URL like the following:
- ![Function response in the browser.](./media/functions-create-function-linux-custom-image/function-app-browser-testing.png)
+# [In-process](#tab/in-process)
+`https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions`
+# [Isolated process](#tab/isolated-process)
+`https://<APP_NAME>.azurewebsites.net/api/HttpExample`
-1. To test authorization, remove the `code=` parameter from the URL and verify that you get no response from the function.
+
+Replace `<APP_NAME>` with the name of your function app. When you navigate to this URL, the browser should display output similar to what you saw when you ran the function locally.
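If you prefer the command line, the same check works with `curl`; a sketch using the in-process URL shape (`HttpExample` is the function name from this quickstart):

```bash
# Invoke the HTTP-triggered function; expect the same greeting you saw locally.
curl "https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions"
```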
## Enable continuous deployment to Azure

You can enable Azure Functions to automatically update your deployment of an image whenever you update the image in the registry.
-1. Enable continuous deployment by using [az functionapp deployment container config](/cli/azure/functionapp/deployment/container#az_functionapp_deployment_container_config) command, replacing `<app_name>` with the name of your function app:
+1. Enable continuous deployment and get the webhook URL by using the following commands:
+ # [Azure CLI](#tab/azure-cli)
```azurecli
- az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <app_name> --resource-group AzureFunctionsContainers-rg
+ az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <APP_NAME> --resource-group AzureFunctionsContainers-rg
+ ```
+
+ The [az functionapp deployment container config](/cli/azure/functionapp/deployment/container#az_functionapp_deployment_container_config) command enables continuous deployment and returns the deployment webhook URL. You can retrieve this URL at any later time by using the [az functionapp deployment container show-cd-url](/cli/azure/functionapp/deployment/container#az_functionapp_deployment_container_show_cd_url) command.
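    For example, to fetch the webhook URL again later (a sketch with the same placeholders as above):

    ```azurecli
    # Prints only the CI_CD_URL value for the deployment webhook.
    az functionapp deployment container show-cd-url --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --query CI_CD_URL --output tsv
    ```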
+
+ # [Azure PowerShell](#tab/azure-powershell)
+ ```azurepowershell
+ Update-AzFunctionAppSetting -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg -AppSetting @{"DOCKER_ENABLE_CI" = "true"}
+ Get-AzWebAppContainerContinuousDeploymentUrl -Name <APP_NAME> -ResourceGroupName AzureFunctionsContainers-rg
```
- This command enables continuous deployment and returns the deployment webhook URL. (You can retrieve this URL at any later time by using the [az functionapp deployment container show-cd-url](/cli/azure/functionapp/deployment/container#az_functionapp_deployment_container_show_cd_url) command.)
+ The `DOCKER_ENABLE_CI` application setting controls whether continuous deployment is enabled from the container repository. The [Get-AzWebAppContainerContinuousDeploymentUrl](/powershell/module/az.websites/get-azwebappcontainercontinuousdeploymenturl) cmdlet returns the URL of the deployment webhook.
+
+
+
+ As before, replace `<APP_NAME>` with your function app name.
1. Copy the deployment webhook URL to the clipboard.
SSH enables secure communication between a container and a client. With SSH enab
::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python,programming-language-java"
-## Write to an Azure Storage queue
+## Write to Azure Queue Storage
Azure Functions lets you connect your functions to other Azure services and resources without having to write your own integration code. These *bindings*, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A *trigger* is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. To learn more, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md).
-This section shows you how to integrate your function with an Azure Storage queue. The output binding that you add to this function writes data from an HTTP request to a message in the queue.
+This section shows you how to integrate your function with Azure Queue Storage. The output binding that you add to this function writes data from an HTTP request to a message in the queue.
[!INCLUDE [functions-cli-get-storage-connection](../../includes/functions-cli-get-storage-connection.md)] ::: zone-end
+## Register binding extensions
+ [!INCLUDE [functions-register-storage-binding-extension-csharp](../../includes/functions-register-storage-binding-extension-csharp.md)] [!INCLUDE [functions-add-output-binding-cli](../../includes/functions-add-output-binding-cli.md)]
This section shows you how to integrate your function with an Azure Storage queu
## Add code to use the output binding
-With the queue binding defined, you can now update your function to receive the `msg` output parameter and write messages to the queue.
+With the queue binding defined, you can now update your function to write messages to the queue using the binding parameter.
::: zone-end ::: zone pivot="programming-language-python"
az group delete --name AzureFunctionsContainers-rg
+ [Monitoring functions](functions-monitoring.md)
+ [Scale and hosting options](functions-scale.md)
+ [Kubernetes-based serverless hosting](functions-kubernetes-keda.md)
+
+[authorization keys]: functions-bindings-http-webhook-trigger.md#authorization-keys
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-create-your-first-function-visual-studio.md
In this article, you learn how to:
> * Run your code locally to verify function behavior.
> * Deploy your code project to Azure Functions.
+This article supports creating both types of compiled C# functions:
+
++ [In-process](functions-create-your-first-function-visual-studio.md?tabs=in-process) - runs in the same process as the Functions host process. To learn more, see [Develop C# class library functions using Azure Functions](functions-dotnet-class-library.md).
++ [Isolated process](functions-create-your-first-function-visual-studio.md?tabs=isolated-process) - runs in a separate .NET worker process. To learn more, see [Guide for running functions on .NET 5.0 in Azure](dotnet-isolated-process-guide.md).
+
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
-
-The project you create runs on .NET Core 3.1. If you instead want to create a project that runs on .NET 5.0, see [Develop and publish .NET 5 functions using Azure Functions](dotnet-isolated-process-developer-howtos.md).
+
+There is also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
## Prerequisites
The `FunctionName` method attribute sets the name of the function, which by defa
Your function definition should now look like the following code:
-
+# [In-process](#tab/in-process)
++
+# [Isolated process](#tab/isolated-process)
++++

Now that you've renamed the function, you can test it on your local computer.

## Run the function locally
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-develop-local.md
The way in which you develop functions on your local computer depends on your [l
|Environment |Languages |Description|
|--|--|--|
-|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vscode)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
-| [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-cli)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
-| [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vs) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
+|[Visual Studio Code](functions-develop-vs-code.md)| [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](./create-first-function-vs-code-powershell.md)<br/>[Python](functions-reference-python.md) | The [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) adds Functions support to VS Code. Requires the Core Tools. Supports development on Linux, macOS, and Windows, when using version 2.x of the Core Tools. To learn more, see [Create your first function using Visual Studio Code](./create-first-function-vs-code-csharp.md). |
+| [Command prompt or terminal](functions-run-local.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md)<br/>[JavaScript](functions-reference-node.md)<br/>[PowerShell](functions-reference-powershell.md)<br/>[Python](functions-reference-python.md) | [Azure Functions Core Tools] provides the core runtime and templates for creating functions, which enable local development. Version 2.x supports development on Linux, macOS, and Windows. All environments rely on Core Tools for the local Functions runtime. |
+| [Visual Studio 2019](functions-develop-vs.md) | [C# (class library)](functions-dotnet-class-library.md)<br/>[C# isolated process (.NET 5.0)](dotnet-isolated-process-guide.md) | The Azure Functions tools are included in the **Azure development** workload of [Visual Studio 2019](https://www.visualstudio.com/vs/) and later versions. Lets you compile functions in a class library and publish the .dll to Azure. Includes the Core Tools for local testing. To learn more, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md). |
| [Maven](./create-first-function-cli-java.md) (various) | [Java](functions-reference-java.md) | Maven archetype supports Core Tools to enable development of Java functions. Version 2.x supports development on Linux, macOS, and Windows. To learn more, see [Create your first function with Java and Maven](./create-first-function-cli-java.md). Also supports development using [Eclipse](functions-create-maven-eclipse.md) and [IntelliJ IDEA](functions-create-maven-intellij.md) |

[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
The following application settings can be included in the **`Values`** array whe
+ To learn more about local development of compiled C# functions using Visual Studio 2019, see [Develop Azure Functions using Visual Studio](functions-develop-vs.md).
+ To learn more about local development of functions using VS Code on a Mac, Linux, or Windows computer, see the Visual Studio Code getting started article for your preferred language:
  + [C# class library](create-first-function-vs-code-csharp.md)
- + [C# isolated process (.NET 5.0)](dotnet-isolated-process-developer-howtos.md?pivots=development-environment-vscode)
+ + [C# isolated process (.NET 5.0)](create-first-function-vs-code-csharp.md?tabs=isolated-process)
  + [Java](create-first-function-vs-code-java.md)
  + [JavaScript](create-first-function-vs-code-node.md)
  + [PowerShell](create-first-function-vs-code-powershell.md)
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
To add a setting in the portal, select **New application setting** and add the n
![Function app settings in the Azure portal.](./media/functions-how-to-use-azure-function-app-settings/azure-function-app-settings-tab.png)
-# [Azure CLI](#tab/azurecli)
+# [Azure CLI](#tab/azure-cli)
The [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings#az_functionapp_config_appsettings_list) command returns the existing application settings, as in the following example:
az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
--settings CUSTOM_FUNCTION_APP_SETTING=12345
```
-# [Azure PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
The [`Get-AzFunctionAppSetting`](/powershell/module/az.functions/get-azfunctionappsetting) cmdlet returns the existing application settings, as in the following example:
To determine the type of plan used by your function app, see **App Service plan*
![View scaling plan in the portal](./media/functions-scale/function-app-overview-portal.png)
-# [Azure CLI](#tab/azurecli)
+# [Azure CLI](#tab/azure-cli)
Run the following Azure CLI command to get your hosting plan type:
az appservice plan list --query "[?id=='$appServicePlanId'].sku.tier" --output t
In the previous example, replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respectively.
-# [Azure PowerShell](#tab/powershell)
+# [Azure PowerShell](#tab/azure-powershell)
Run the following Azure PowerShell command to get your hosting plan type:
Use the following procedure to migrate from a Premium plan to a Consumption plan
az functionapp plan delete --name <PREMIUM_PLAN> --resource-group <MY_RESOURCE_GROUP>
```
+## Get your function access keys
+
+HTTP triggered functions can generally be called by using a URL in the format `https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>`. When the authorization level of your function is set to a value other than `anonymous`, you must also provide an access key in your request. The access key can be provided either in the URL by using the `?code=` query string or in a request header. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). There are several ways to get your access keys.
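For example, a key passed in the query string looks like the following sketch (`<FUNCTION_KEY>` stands in for a key retrieved by one of the methods below):

```bash
# Call an HTTP-triggered function, passing the access key as the code query parameter.
curl "https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?code=<FUNCTION_KEY>"
```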
+
+# [Portal](#tab/portal)
+
+1. Sign in to the Azure portal, then search for and select **Function App**.
+
+1. Select the function you want to verify.
+
+1. In the left navigation under **Functions**, select **App keys**.
+
+ This returns the host keys, which can be used to access any function in the app. It also returns the system key, which gives anyone administrator-level access to all function app APIs.
+
+You can also practice least privilege by using a key scoped to just a specific function. To get function-specific keys, select **Function keys** under **Developer** in your HTTP triggered function.
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the following script in Azure Cloud Shell, the output of which is the [default (host) key](functions-bindings-http-webhook-trigger.md#authorization-scopes-function-level) that can be used to access any HTTP triggered function in the function app.
+
+```azurecli-interactive
+subName='<SUBSCRIPTION_ID>'
+resGroup=AzureFunctionsContainers-rg
+appName='<APP_NAME>'
+path=/subscriptions/$subName/resourceGroups/$resGroup/providers/Microsoft.Web/sites/$appName/host/default/listKeys?api-version=2018-11-01
+az rest --method POST --uri $path --query functionKeys.default --output tsv
+```
+
+In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your subscription and your function app name, respectively. This script runs on Bash in Cloud Shell. It must be modified to run in a Windows command prompt.
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Run the following script, the output of which is the [default (host) key](functions-bindings-http-webhook-trigger.md#authorization-scopes-function-level) that can be used to access any HTTP triggered function in the function app.
+
+```powershell-interactive
+$subName = '<SUBSCRIPTION_ID>'
+$rGroup = 'AzureFunctionsContainers-rg'
+$appName = '<APP_NAME>'
+$path = "/subscriptions/$subName/resourceGroups/$rGroup/providers/Microsoft.Web/sites/$appName/host/default/listKeys?api-version=2018-11-01"
+((Invoke-AzRestMethod -Path $path -Method POST).Content | ConvertFrom-JSON).functionKeys.default
+```
+
+In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your subscription and your function app name, respectively.
+++

## Platform features

Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have access to most of the features of Azure's core web hosting platform. The left pane is where you access the many features of the App Service platform that you can use in your function apps.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
ms.devlang: na
na Previously updated : 08/20/2021 Last updated : 08/24/2021 # Compare Azure Government and global Azure
The following features have known limitations in Azure Government:
- Limitations with Azure AD join:
  - Enterprise state roaming for Windows 10 devices is not available
+### [Azure Defender for IoT](../defender-for-iot/index.yml)
+
+For feature variations and limitations, see [Cloud feature availability for US Government customers](../security/fundamentals/feature-availability.md#azure-defender-for-iot).
+
### [Azure Information Protection](/azure/information-protection/what-is-information-protection)

Azure Information Protection Premium is part of the [Enterprise Mobility + Security](/enterprise-mobility-security) suite. For details on this service and how to use it, see the [Azure Information Protection Premium Government Service Description](/enterprise-mobility-security/solutions/ems-aip-premium-govt-service-description).

### [Azure Security Center](../security-center/security-center-introduction.md)
-The following Azure Security Center **features are not currently available** in Azure Government:
--- **1st and 3rd party integrations**
- - [Connect AWS account](../security-center/quickstart-onboard-aws.md)
- - [Connect GCP account](../security-center/quickstart-onboard-gcp.md)
- - [Integrated vulnerability assessment for machines (powered by Qualys)](../security-center/deploy-vulnerability-assessment-vm.md).
-
- > [!NOTE]
- > Security Center internal assessments are provided to discover security misconfigurations, based on Common Configuration Enumeration such as password policy, windows FW rules, local machine audit and security policy, and additional OS hardening settings.
--- **Threat detection**
- - [Azure Defender for App Service](../security-center/defender-for-app-service-introduction.md).
- - [Azure Defender for Key Vault](../security-center/defender-for-key-vault-introduction.md)
- - *Specific detections*: Detections based on VM log periodic batches, Azure core router network logs, and threat intelligence reports.
-
- > [!NOTE]
- > Near real-time alerts generated based on security events and raw data collected from the VMs are captured and displayed.
--- **Environment hardening**
- - [Adaptive network hardening](../security-center/security-center-adaptive-network-hardening.md)
--- **Preview features**
- - [Recommendation exemption rules](../security-center/exempt-resource.md)
- - [Azure Defender for Resource Manager](../security-center/defender-for-resource-manager-introduction.md)
- - [Azure Defender for DNS](../security-center/defender-for-dns-introduction.md)
-
-**Azure Security Center FAQ**
-
-For Azure Security Center FAQ, see [Azure Security Center frequently asked questions public documentation](../security-center/faq-general.yml). Extra FAQ for Azure Security Center in Azure Government is listed below.
-
-**What will customers be charged for Azure Security Center in Azure Government?**</br>
-Azure Security Center's integrated cloud workload protection platform (CWPP), Azure Defender, brings advanced, intelligent, protection of your Azure and hybrid resources and workloads. Azure Defender is free for the first 30 days. Should you choose to continue to use public preview or generally available features of Azure Defender beyond 30 days, we automatically start to charge for the service.
-
-**Is Azure Security Center available for DoD customers?**</br>
-Azure Security Center is deployed in Azure Government regions but not in Azure Government for DoD regions. Azure resources created in DoD regions can still utilize Security Center capabilities. However, using it will result in Security Center collected data being moved out from DoD regions and stored in Azure Government regions. By default, all Security Center features that collect and store data are disabled for resources hosted in DoD regions. The type of data collected and stored varies depending on the selected feature. If you want to enable Azure Security Center features for DoD resources, you are advised to consider data separation and protection requirements before doing so.
+For feature variations and limitations, see [Cloud feature availability for US Government customers](../security/fundamentals/feature-availability.md#azure-security-center).
### [Azure Sentinel](../sentinel/overview.md)
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
Title: Azure and other Microsoft cloud services compliance scope
-description: This article tracks FedRAMP, DoD, and ICD 503 compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services across Azure, Azure Government, and Azure Government Secret cloud environments.
+description: This article tracks FedRAMP and DoD compliance scope for Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services across Azure, Azure Government, and Azure Government Secret cloud environments.
Previously updated : 08/20/2021 Last updated : 08/24/2021 # Azure, Dynamics 365, Microsoft 365, and Power Platform services compliance scope
Microsoft Azure cloud environments meet demanding US government compliance requi
- [Federal Risk and Authorization Management Program](https://www.fedramp.gov/) (FedRAMP)
- Department of Defense (DoD) Cloud Computing [Security Requirements Guide](https://dl.dod.cyber.mil/wp-content/uploads/cloud/SRG/index.html) (SRG) Impact Level (IL) 2, 4, 5, and 6
- [Intelligence Community Directive (ICD) 503](http://www.dni.gov/files/documents/ICD/ICD_503.pdf)
+- [Joint Special Access Program (SAP) Implementation Guide (JSIG)](https://www.dcsa.mil/portals/91/documents/ctp/nao/JSIG_2016April11_Final_(53Rev4).pdf)
**Azure** (also known as Azure Commercial, Azure Public, or Azure Global) maintains the following authorizations:
For current Azure Government regions and available services, see [Products avail
**Azure Government Secret** maintains:

- [DoD IL6](/azure/compliance/offerings/offering-dod-il6) PA issued by DISA
-- [ICD 503](/azure/compliance/offerings/offering-icd-503) with facilities at ICD 705 (for authorization details, contact your Microsoft account representative)
+- [ICD 503](/azure/compliance/offerings/offering-icd-503) ATO with facilities at ICD 705 (for authorization details, contact your Microsoft account representative)
+- [JSIG PL3](/azure/compliance/offerings/offering-jsig) ATO (for authorization details, contact your Microsoft account representative)
-This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for the above authorizations across Azure, Azure Government, and Azure Government Secret cloud environments.
+**Azure Government Top Secret** maintains:
+
+- [ICD 503](/azure/compliance/offerings/offering-icd-503) ATO with facilities at ICD 705 (for authorization details, contact your Microsoft account representative)
+- [JSIG PL3](/azure/compliance/offerings/offering-jsig) ATO (for authorization details, contact your Microsoft account representative)
+
+This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and Power Platform cloud services in scope for FedRAMP High, DoD IL2, DoD IL4, DoD IL5, and DoD IL6 authorizations across Azure, Azure Government, and Azure Government Secret cloud environments. For other authorization details in Azure Government Secret and Azure Government Top Secret, contact your Microsoft account representative.
## Azure public services by audit scope

*Last Updated: August 2021*
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
- &#x2705; = service is included in audit scope and has been authorized
- Planned 2021 = service will undergo a FedRAMP High assessment in 2021; once the service is authorized, status will be updated
-| Service | DoD IL2 | FedRAMP High | Planned 2021 |
-| - |:--:|::|::|
+| Service | FedRAMP High | DoD IL2 | Planned 2021 |
+| - |:-:|:-:|:-:|
| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | |
| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | |
| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | |
| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | |
| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | |
| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | |
| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | |
| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | |
| [Azure Databricks](https://azure.microsoft.com/services/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | |
+| [Azure Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | |
| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | | | [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | | | [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | |
| [Azure Health Bot](/healthbot/) | &#x2705; | &#x2705; | |
| [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | |
| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | |
| [Azure Internet Analyzer](https://azure.microsoft.com/services/internet-analyzer/) | &#x2705; | &#x2705; | |
| [Azure IoT Central](https://azure.microsoft.com/services/iot-central/) | | | &#x2705; |
| [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | |
-| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | &#x2705; | &#x2705; | |
| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | | | [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | | | [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Marketplace portal](https://azuremarketplace.microsoft.com/) | &#x2705; | &#x2705; | |
| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | |
| [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; | |
| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Application Insights](../../azure-monitor/app/app-insights-overview.md), [Log Analytics](../../azure-monitor/logs/data-platform-logs.md), and [Application Change Analysis](../../azure-monitor/app/change-analysis.md)) | &#x2705; | &#x2705; | |
| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | |
| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | |
| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) (incl. [UEBA](../../sentinel/identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba)) | &#x2705; | &#x2705; | |
| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure VMware Solution](https://azure.microsoft.com/services/azure-vmware/) | | | &#x2705; |
| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | |
| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | |
| [Cognitive | [Cognitive
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Container Instances](https://azure.microsoft.com/services/container-instances/) | &#x2705; | &#x2705; | |
| [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | |
| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | |
| [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | |
| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | |
| [GitHub AE](https://docs.github.com/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [GitHub Codespaces](https://visualstudio.microsoft.com/services/github-codespaces/) (formerly Visual Studio Codespaces) | &#x2705; | &#x2705; | |
| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | |
| [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | |
| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | |
| [Microsoft Threat Experts](/microsoft-365/security/defender-endpoint/microsoft-threat-experts) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) incl. [Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | |
| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [SQL Server Registry](/sql/sql-server/end-of-support/sql-server-extended-security-updates) | &#x2705; | &#x2705; | |
| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | |
| [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | |
-| **Service** | **DoD IL2** | **FedRAMP High** | **Planned 2021** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **Planned 2021** |
| [Storage: Data Movement](../../storage/common/storage-use-data-movement-library.md) | &#x2705; | &#x2705; | |
| [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../../virtual-machines/managed-disks-overview.md)) | &#x2705; | &#x2705; | |
| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
### Terminology used

- Azure Government = Azure Government regions US Gov Arizona, US Gov Texas, and US Gov Virginia
-- FR High = FedRAMP High Provisional Authorization to Operate (P-ATO) in Azure Government
+- FedRAMP High = FedRAMP High Provisional Authorization to Operate (P-ATO) in Azure Government
- DoD IL2 = DoD SRG Impact Level 2 Provisional Authorization (PA) in Azure Government
- DoD IL4 = DoD SRG Impact Level 4 Provisional Authorization (PA) in Azure Government
- DoD IL5 = DoD SRG Impact Level 5 Provisional Authorization (PA) in Azure Government
- DoD IL6 = DoD SRG Impact Level 6 Provisional Authorization (PA) in Azure Government Secret
-- ICD 503 Secret = Intelligence Community Directive 503 Authorization to Operate (ATO) in Azure Government Secret
- &#x2705; = service is included in audit scope and has been authorized
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
> - Some services deployed in Azure Government regions (US Gov Arizona, US Gov Texas, and US Gov Virginia) require extra configuration to meet DoD IL5 compute and storage isolation requirements, as explained in **[Isolation guidelines for Impact Level 5 workloads](../documentation-government-impact-level-5.md).**
> - For DoD IL5 PA compliance scope in Azure Government DoD regions (US DoD Central and US DoD East), see **[Azure Government DoD regions IL5 audit scope](../documentation-government-overview-dod.md#azure-government-dod-regions-il5-audit-scope).**
-| Service | FR High / DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 | ICD 503 Secret |
-| - |:--:|:-:|:-:|:-:|:--:|
-| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | | | | |
+| Service | FedRAMP High | DoD IL2 | DoD IL4 | DoD IL5 | DoD IL6 |
+| - |:-:|:-:|:-:|:-:|:-:|
+| [API Management](https://azure.microsoft.com/services/api-management/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [App Configuration](https://azure.microsoft.com/services/app-configuration/) | &#x2705; | &#x2705; | | | |
| [Application Gateway](https://azure.microsoft.com/services/application-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Automation](https://azure.microsoft.com/services/automation/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Automation](https://azure.microsoft.com/services/automation/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Active Directory (Free and Basic)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Active Directory (Premium P1 + P2)](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Active Directory Domain Services](https://azure.microsoft.com/services/active-directory-ds/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Advisor](https://azure.microsoft.com/services/advisor/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Analysis Services](https://azure.microsoft.com/services/analysis-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Archive Storage](https://azure.microsoft.com/services/storage/archive/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Backup](https://azure.microsoft.com/services/backup/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Bastion](https://azure.microsoft.com/services/azure-bastion/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Blueprints](https://azure.microsoft.com/services/blueprints/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Bot Service](/azure/bot-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Cache for Redis](https://azure.microsoft.com/services/cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Cloud Services](https://azure.microsoft.com/services/cloud-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Cognitive Search](https://azure.microsoft.com/services/search/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Cosmos DB](https://azure.microsoft.com/services/cosmos-db/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Cost Management and Billing](https://azure.microsoft.com/services/cost-management/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Data Box](https://azure.microsoft.com/services/databox/) **&ast;** | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Cost Management and Billing](https://azure.microsoft.com/services/cost-management/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Data Box](https://azure.microsoft.com/services/databox/) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Data Explorer](https://azure.microsoft.com/services/data-explorer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Data Share](https://azure.microsoft.com/services/data-share/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Database for MariaDB](https://azure.microsoft.com/services/mariadb/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Database for MySQL](https://azure.microsoft.com/services/mysql/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Database for PostgreSQL](https://azure.microsoft.com/services/postgresql/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Database Migration Service](https://azure.microsoft.com/services/database-migration/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Databricks](https://azure.microsoft.com/services/databricks/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure DDoS Protection](https://azure.microsoft.com/services/ddos-protection/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Dedicated HSM](https://azure.microsoft.com/services/azure-dedicated-hsm/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
+| [Azure Defender for IoT](https://azure.microsoft.com/services/azure-defender-for-iot/) (formerly Azure Security for IoT) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure DevTest Labs](https://azure.microsoft.com/services/devtest-lab/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure DNS](https://azure.microsoft.com/services/dns/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure ExpressRoute](https://azure.microsoft.com/services/expressroute/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure File Sync](../../storage/file-sync/file-sync-introduction.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | | | | |
-| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure File Sync](../../storage/file-sync/file-sync-introduction.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Firewall](https://azure.microsoft.com/services/azure-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Form Recognizer](https://azure.microsoft.com/services/form-recognizer/) | &#x2705; | &#x2705; | | | |
+| [Azure Front Door](https://azure.microsoft.com/services/frontdoor/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Functions](https://azure.microsoft.com/services/functions/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure IoT Security](https://azure.microsoft.com/overview/iot/security/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
-| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure HDInsight](https://azure.microsoft.com/services/hdinsight/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Healthcare APIs](https://azure.microsoft.com/services/healthcare-apis/) (formerly Azure API for FHIR) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure HPC Cache](https://azure.microsoft.com/services/hpc-cache/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Information Protection](https://azure.microsoft.com/services/information-protection/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure IoT Hub](https://azure.microsoft.com/services/iot-hub/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Lab Services](https://azure.microsoft.com/services/lab-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Lighthouse](https://azure.microsoft.com/services/azure-lighthouse/)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Policy Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | | | | |
-| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Managed Applications](https://azure.microsoft.com/services/managed-applications/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Maps](https://azure.microsoft.com/services/azure-maps/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Media Services](https://azure.microsoft.com/services/media-services/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Monitor](https://azure.microsoft.com/services/monitor/) (incl. [Log Analytics](../../azure-monitor/logs/data-platform-logs.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Peering Service](../../peering-service/about.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Policy](https://azure.microsoft.com/services/azure-policy/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Policy Guest Configuration](../../governance/policy/concepts/guest-configuration.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Public IP](../../virtual-network/public-ip-addresses.md) | &#x2705; | &#x2705; | | | |
+| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
-| [Azure Scheduler](../../scheduler/scheduler-intro.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) | &#x2705; | &#x2705; | &#x2705; | | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Scheduler](../../scheduler/scheduler-intro.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Security Center](https://azure.microsoft.com/services/security-center/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Sentinel](https://azure.microsoft.com/services/azure-sentinel/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure SQL Database](https://azure.microsoft.com/services/sql-database/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | | | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
-| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Azure Web Application Firewall)](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Azure SQL Managed Instance](https://azure.microsoft.com/products/azure-sql/managed-instance/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Stack Bridge](/azure-stack/operator/azure-stack-usage-reporting) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Stack Edge](https://azure.microsoft.com/products/azure-stack/edge/) (formerly Data Box Edge) **&ast;** | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Synapse Link for Dataverse](/powerapps/maker/data-platform/export-to-data-lake) | &#x2705; | &#x2705; | | | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Azure Virtual Desktop](https://azure.microsoft.com/services/virtual-desktop/) (formerly Windows Virtual Desktop) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Web Application Firewall](https://azure.microsoft.com/services/web-application-firewall/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Batch](https://azure.microsoft.com/services/batch/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Cognitive Services (ten rows truncated in the source)] | | | | | |
-| [Container Instances](https://azure.microsoft.com/services/container-instances/)| &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
+| [Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Cognitive Services (ten rows truncated in the source)] | | | | | |
+| [Container Instances](https://azure.microsoft.com/services/container-instances/)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Container Registry](https://azure.microsoft.com/services/container-registry/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Customer Insights](/dynamics365/customer-insights/audience-insights/overview) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Content Delivery Network](https://azure.microsoft.com/services/cdn/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Customer Lockbox](../../security/fundamentals/customer-lockbox-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Data Factory](https://azure.microsoft.com/services/data-factory/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dataverse](/powerapps/maker/common-data-service/data-platform-intro) (formerly Common Data Service) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Chat (Omnichannel Engagement Hub)](/dynamics365/omnichannel/introduction-omnichannel) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Customer Insights](/dynamics365/customer-insights/audience-insights/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Customer Voice](/dynamics365/customer-voice/about) (formerly Forms Pro) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Customer Service](/dynamics365/customer-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Field Service](/dynamics365/field-service/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Project Service Automation](/dynamics365/project-operations/psa/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Dynamics 365 Sales](https://dynamics.microsoft.com/sales/overview/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Event Grid](https://azure.microsoft.com/services/event-grid/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Event Hubs](https://azure.microsoft.com/services/event-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | | | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
-| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [GitHub AE](https://docs.github.com/en/github-ae@latest/admin/overview/about-github-ae) | &#x2705; | &#x2705; | | | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Import/Export](https://azure.microsoft.com/services/storage/import-export/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Key Vault](https://azure.microsoft.com/services/key-vault/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Load Balancer](https://azure.microsoft.com/services/load-balancer/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | | | | |
+| [Microsoft 365 Defender](/microsoft-365/security/defender/) (formerly Microsoft Threat Protection) | &#x2705; | &#x2705; | | | |
| [Microsoft Azure portal](https://azure.microsoft.com/features/azure-portal/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Microsoft Azure Government portal](../documentation-government-get-started-connect-with-portal.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security)| &#x2705; | &#x2705; | &#x2705; | | |
-| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Microsoft Defender for Identity](/defender-for-identity/what-is) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Cloud App Security](/cloud-app-security/what-is-cloud-app-security)| &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) (formerly Microsoft Defender Advanced Threat Protection) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Microsoft Defender for Identity](/defender-for-identity/what-is) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Microsoft Graph](/graph/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Microsoft Intune](/mem/intune/fundamentals/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Microsoft Stream](/stream/overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Multifactor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Network Watcher](https://azure.microsoft.com/services/network-watcher/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
-| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Planned Maintenance for VMs](../../virtual-machines/maintenance-control-portal.md) | &#x2705; | | | | |
-| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power BI](https://powerbi.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power Data Integrator](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power Query Online](https://powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | | | | |
-| [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [Network Watcher Traffic Analytics](../../network-watcher/traffic-analytics.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
+| [Notification Hubs](https://azure.microsoft.com/services/notification-hubs/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Planned Maintenance for VMs](../../virtual-machines/maintenance-control-portal.md) | &#x2705; | &#x2705; | | | |
+| [Power Apps](/powerapps/powerapps-overview) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Automate](/power-automate/getting-started) (formerly Microsoft Flow) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power BI](https://powerbi.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power BI Embedded](https://azure.microsoft.com/services/power-bi-embedded/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Data Integrator](/power-platform/admin/data-integrator) (formerly Dynamics 365 Integrator App) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Query Online](https://powerquery.microsoft.com/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) | &#x2705; | &#x2705; | | | |
+| [Private Link](https://azure.microsoft.com/services/private-link/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Service Bus](https://azure.microsoft.com/services/service-bus/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [SQL Server Stretch Database](https://azure.microsoft.com/services/sql-server-stretch-database/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Storage: Blobs](https://azure.microsoft.com/services/storage/blobs/) (incl. [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Disks](https://azure.microsoft.com/services/storage/disks/) (incl. [managed disks](../../virtual-machines/managed-disks-overview.md)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Files](https://azure.microsoft.com/services/storage/files/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| **Service** | **FR High / DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | **ICD 503 Secret** |
+| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Storage: Queues](https://azure.microsoft.com/services/storage/queues/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Storage: Tables](https://azure.microsoft.com/services/storage/tables/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | &#x2705; | | |
+| [StorSimple](https://azure.microsoft.com/services/storsimple/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Traffic Manager](https://azure.microsoft.com/services/traffic-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Machine Scale Sets](https://azure.microsoft.com/services/virtual-machine-scale-sets/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Machines](https://azure.microsoft.com/services/virtual-machines/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Virtual Network](https://azure.microsoft.com/services/virtual-network/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | &#x2705; | &#x2705; | &#x2705; | | |
-| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | &#x2705; | | | &#x2705; | &#x2705; |
+| [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Virtual WAN](https://azure.microsoft.com/services/virtual-wan/) | &#x2705; | &#x2705; | | | &#x2705; |
| [VPN Gateway](https://azure.microsoft.com/services/vpn-gateway/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Web Apps (App Service)](https://azure.microsoft.com/services/app-service/web/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/alerts/action-groups.md
Last updated 05/28/2021
# Create and manage action groups in the Azure portal
-An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user's requirements.
+An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor, Service Health, and Azure Advisor alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user's requirements.
This article shows you how to create and manage action groups in the Azure portal.
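Although this article walks through the portal, the same object can also be created from the Azure CLI. The following is a minimal sketch, assuming illustrative names (`MyActionGroup`, `MyResourceGroup`, the receiver name, and the email address are not from the original article):

```azurecli
# Sketch: create an action group with a single email receiver (all names are illustrative)
az monitor action-group create \
  --name MyActionGroup \
  --resource-group MyResourceGroup \
  --short-name myag \
  --action email AdminEmail admin@contoso.com
```

Alert rules in Azure Monitor, Service Health, or Azure Advisor can then reference the action group by name.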
azure-monitor Annotations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/annotations.md
You can use the CreateReleaseAnnotation PowerShell script to create annotations
    }
    $body = (ConvertTo-Json $annotation -Compress) -replace '(\\+)"', '$1$1"' -replace "`"", "`"`""
    az rest --method put --uri "$($aiResourceId)/Annotations?api-version=2015-05-01" --body "$($body) "
+
+ # Use the following command for Linux Azure DevOps Hosts or other PowerShell scenarios
+ # Invoke-AzRestMethod -Path "$aiResourceId/Annotations?api-version=2015-05-01" -Method PUT -Payload $body
```

3. Call the PowerShell script with the following code, replacing the angle-bracketed placeholders with your values. The -releaseProperties are optional.
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/profiler-troubleshooting.md
It will display a Status Page similar to the one below:
### Manual installation
-When you configure Profiler, updates are made to the web app's settings. If your environment requires it, you can apply the updates manually. An example might be that your application is running in a Web Apps environment for PowerApps. To apply updates manually:
+When you configure Profiler, updates are made to the web app's settings. If your environment requires it, you can apply the updates manually. An example might be that your application is running in a Web Apps environment for Power Apps. To apply updates manually:
1. In the **Web App Control** pane, open **Settings**.
azure-monitor Monitor Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/monitor-reference.md
The following table lists Azure services and the data they collect into Azure Mo
|Maps | No | No | No | |
|Media Services | Yes | Yes | No | |
|Microsoft Managed Desktop | No | No | No | |
-|Microsoft PowerApps | No | No | No | |
+|Microsoft Power Apps | No | No | No | |
|Microsoft Social Engagement | No | No | No | |
|Microsoft Stream | Yes | Yes | No | |
|Migrate | No | No | No | |
azure-netapp-files Azure Netapp Files Delegate Subnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azure-netapp-files-delegate-subnet.md
na ms.devlang: na Previously updated : 09/28/2020 Last updated : 08/25/2021

# Delegate a subnet to Azure NetApp Files
You must delegate a subnet to Azure NetApp Files. When you create a volume, yo
## Considerations
-* The wizard for creating a new subnet defaults to a /24 network mask, which provides for 251 available IP addresses. Using a /28 network mask, which provides for 11 usable IP addresses, is sufficient for the service.
-* In each Azure Virtual Network (VNet), only one subnet can be delegated to Azure NetApp Files.
+* The wizard for creating a new subnet defaults to a /24 network mask, which provides 251 available IP addresses. A /28 network mask, which provides 11 usable IP addresses, is sufficient for most use cases. Consider a larger subnet (for example, a /26 network mask) in scenarios such as SAP HANA, where many volumes and storage endpoints are anticipated. You can also keep the default /24 network mask proposed by the wizard if you don't need to reserve many client or VM IP addresses in your Azure Virtual Network (VNet). Note that the delegated subnet's network mask can't be changed after the initial creation. A CLI sketch for creating a delegated subnet follows this list.
+* In each VNet, only one subnet can be delegated to Azure NetApp Files.
Azure enables you to create multiple delegated subnets in a VNet. However, any attempts to create a new volume will fail if you use more than one delegated subnet. You can have only a single delegated subnet in a VNet. A NetApp account can deploy volumes into multiple VNets, each having its own delegated subnet.
* You cannot designate a network security group or service endpoint in the delegated subnet. Doing so causes the subnet delegation to fail.
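Azure reserves five IP addresses in every subnet, which is why a /28 (16 addresses) yields 11 usable addresses and a /24 (256 addresses) yields 251. As a sketch, a /26 subnet delegated to Azure NetApp Files could be created with the Azure CLI as follows (resource names and the address prefix are illustrative, not from the original article):

```azurecli
# Sketch: create a subnet and delegate it to Azure NetApp Files (names and prefix are illustrative)
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name anf-delegated-subnet \
  --address-prefixes 10.0.0.0/26 \
  --delegations "Microsoft.NetApp/volumes"
```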
You can also create and delegate a subnet when you [create a volume for Azure Ne
## Next steps

* [Create a volume for Azure NetApp Files](azure-netapp-files-create-volumes.md)
-* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
+* [Learn about virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md)
azure-sql Active Geo Replication Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/active-geo-replication-configure-portal.md
Title: "Tutorial: Geo-replication & failover in portal"
-description: Configure geo-replication for a database using the Azure portal and initiate failover.
+description: Learn how to configure geo-replication for a SQL database using the Azure portal or Azure CLI, and initiate failover.
Previously updated : 02/13/2019 Last updated : 08/20/2021
-# Tutorial: Configure active geo-replication and failover in the Azure portal (Azure SQL Database)
+# Tutorial: Configure active geo-replication and failover (Azure SQL Database)
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This article shows you how to configure [active geo-replication for Azure SQL Database](active-geo-replication-overview.md#active-geo-replication-terminology-and-capabilities) using the [Azure portal](https://portal.azure.com) and to initiate failover.
+This article shows you how to configure [active geo-replication for Azure SQL Database](active-geo-replication-overview.md#active-geo-replication-terminology-and-capabilities) using the [Azure portal](https://portal.azure.com) or Azure CLI and to initiate failover.
For best practices using auto-failover groups, see [Best practices for Azure SQL Database](auto-failover-group-overview.md#best-practices-for-sql-database) and [Best practices for Azure SQL Managed Instance](auto-failover-group-overview.md#best-practices-for-sql-managed-instance).
For best practices using auto-failover groups, see [Best practices for Azure SQL
## Prerequisites
+# [Portal](#tab/portal)
+
To configure active geo-replication by using the Azure portal, you need the following resource:

* A database in Azure SQL Database: The primary database that you want to replicate to a different geographical region.
To configure active geo-replication by using the Azure portal, you need the foll
> [!Note]
> When using Azure portal, you can only create a secondary database within the same subscription as the primary. If a secondary database is required to be in a different subscription, use [Create Database REST API](/rest/api/sql/databases/createorupdate) or [ALTER DATABASE Transact-SQL API](/sql/t-sql/statements/alter-database-transact-sql).
+# [Azure CLI](#tab/azure-cli)
+
+To configure active geo-replication, you need a database in Azure SQL Database. It's the primary database that you want to replicate to a different geographical region.
+
+Prepare your environment for the Azure CLI.
++++

## Add a secondary database

The following steps create a new secondary database in a geo-replication partnership.
The secondary database has the same name as the primary database and has, by def
After the secondary is created and seeded, data begins replicating from the primary database to the new secondary database. > [!NOTE]
-> If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship) the command fails.
+> If the partner database already exists (for example, as a result of terminating a previous geo-replication relationship), the command fails.
+
+# [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com), browse to the database that you want to set up for geo-replication.
-2. On the SQL Database page, select **geo-replication**, and then select the region to create the secondary database. You can select any region other than the region hosting the primary database, but we recommend the [paired region](../../best-practices-availability-paired-regions.md).
+2. On the SQL Database page, select your database, scroll to **Data management**, select **Replicas**, and then select **Create replica**.
+
+ :::image type="content" source="./media/active-geo-replication-configure-portal/azure-cli-create-geo-replica.png" alt-text="Configure geo-replication":::
+
+3. Select or create the server for the secondary database, and configure the **Compute + storage** options if necessary. You can select any region for your secondary server, but we recommend the [paired region](../../best-practices-availability-paired-regions.md).
+
+   :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-create-and-configure-replica.png" alt-text="Screenshot that shows creating and configuring a geo replica.":::
+
+ Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a pool, select **Yes** next to **Want to use SQL elastic pool?** and select a pool on the target server. A pool must already exist on the target server. This workflow doesn't create a pool.
+
+4. Click **Review + create**, review the information, and then click **Create**.
+5. The secondary database is created and the deployment process begins.
+
+ :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-geo-replica-deployment.png" alt-text="Screenshot that shows the deployment status of the secondary database.":::
+
+6. When the deployment is complete, the secondary database displays its status.
+
+ :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-sql-database-secondary-status.png" alt-text="Screenshot that shows the secondary database status after deployment.":::
+
+7. Return to the primary database page, and then select **Replicas**. Your secondary database is listed under **Geo replicas**.
+
+ :::image type="content" source="./media/active-geo-replication-configure-portal/azure-sql-db-geo-replica-list.png" alt-text="Screenshot that shows the SQL database primary and geo replicas.":::
+
+# [Azure CLI](#tab/azure-cli)
- ![Configure geo-replication](./media/active-geo-replication-configure-portal/configure-geo-replication.png)
-3. Select or configure the server and pricing tier for the secondary database.
+Select the database you want to set up for geo-replication. You'll need the following information:
+- Your original Azure SQL database name.
+- The Azure SQL server name.
+- Your resource group name.
+- The name of the server to create the new replica in.
- ![create secondary form](./media/active-geo-replication-configure-portal/create-secondary.png)
-4. Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a pool, click **elastic pool** and select a pool on the target server. A pool must already exist on the target server. This workflow does not create a pool.
-5. Click **Create** to add the secondary.
-6. The secondary database is created and the seeding process begins.
+> [!NOTE]
+> The secondary database must have the same service tier as the primary.
+
+You can select any region for your secondary server, but we recommend the [paired region](../../best-practices-availability-paired-regions.md).
+
+Run the [az sql db replica create](/cli/azure/sql/db/replica#az_sql_db_replica_create) command.
+
+```azurecli
+az sql db replica create --resource-group ContosoHotel --server contosoeast --name guestlist --partner-server contosowest --family Gen5 --capacity 2 --secondary-type Geo
+```
+
+Optionally, you can add a secondary database to an elastic pool. To create the secondary database in a pool, use the `--elastic-pool` parameter. A pool must already exist on the target server. This workflow doesn't create a pool.
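As a sketch, the same command can target an existing pool on the partner server (the pool name `mypool` is illustrative, not from the original tutorial):

```azurecli
# Sketch: create the geo-secondary inside an elastic pool that already exists on the partner server
az sql db replica create --resource-group ContosoHotel --server contosoeast --name guestlist --partner-server contosowest --elastic-pool mypool --secondary-type Geo
```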
- ![secondaries map](./media/active-geo-replication-configure-portal/seeding0.png)
-7. When the seeding process is complete, the secondary database displays its status.
+The secondary database is created and the deployment process begins.
- ![Seeding complete](./media/active-geo-replication-configure-portal/seeding-complete.png)
+When the deployment is complete, you can check the status of the secondary database by running the [az sql db replica list-links](/cli/azure/sql/db/replica#az_sql_db_replica_list-links) command:
+
+```azurecli
+az sql db replica list-links --name guestlist --resource-group ContosoHotel --server contosowest
+```
++

## Initiate a failover
-The secondary database can be switched to become the primary.
+The secondary database can be switched to become the primary.
+
+# [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com), browse to the primary database in the geo-replication partnership.
-2. On the SQL Database blade, select **All settings** > **geo-replication**.
-3. In the **SECONDARIES** list, select the database you want to become the new primary and click **Forced Failover**.
+2. Scroll to **Data management**, and then select **Replicas**.
+3. In the **Geo replicas** list, select the database you want to become the new primary, select the ellipsis, and then select **Forced failover**.
- ![failover](./media/active-geo-replication-configure-portal/secondaries.png)
-4. Click **Yes** to begin the failover.
+ :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-select-forced-failover.png" alt-text="Screenshot that shows selecting forced failover from the drop-down.":::
+4. Select **Yes** to begin the failover.
-The command immediately switches the secondary database into the primary role. This process normally should complete within 30 sec or less.
+# [Azure CLI](#tab/azure-cli)
-There is a short period during which both databases are unavailable (on the order of 0 to 25 seconds) while the roles are switched. If the primary database has multiple secondary databases, the command automatically reconfigures the other secondaries to connect to the new primary. The entire operation should take less than a minute to complete under normal circumstances.
+Run the [az sql db replica set-primary](/cli/azure/sql/db/replica#az_sql_db_replica_set-primary) command.
+
+```azurecli
+az sql db replica set-primary --name guestlist --resource-group ContosoHotel --server contosowest
+```
+++
+The command immediately switches the secondary database into the primary role. This process should normally complete within 30 seconds or less.
+
+There's a short period during which both databases are unavailable, on the order of 0 to 25 seconds, while the roles are switched. If the primary database has multiple secondary databases, the command automatically reconfigures the other secondaries to connect to the new primary. The entire operation should take less than a minute to complete under normal circumstances.
> [!NOTE]
-> This command is designed for quick recovery of the database in case of an outage. It triggers failover without data synchronization (forced failover). If the primary is online and committing transactions when the command is issued some data loss may occur.
+> This command is designed for quick recovery of the database in case of an outage. It triggers a failover without data synchronization (a forced failover). If the primary is online and committing transactions when the command is issued, some data loss may occur.
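In the Azure CLI, the equivalent forced failover is requested by adding the `--allow-data-loss` flag to the `set-primary` command shown earlier (a sketch reusing the tutorial's example names):

```azurecli
# Sketch: force a failover without data synchronization; recent transactions may be lost
az sql db replica set-primary --name guestlist --resource-group ContosoHotel --server contosowest --allow-data-loss
```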
## Remove secondary database
-This operation permanently terminates the replication to the secondary database, and changes the role of the secondary to a regular read-write database. If the connectivity to the secondary database is broken, the command succeeds but the secondary does not become read-write until after connectivity is restored.
+This operation permanently stops the replication to the secondary database, and changes the role of the secondary to a regular read-write database. If the connectivity to the secondary database is broken, the command succeeds but the secondary doesn't become read-write until after connectivity is restored.
+
+# [Portal](#tab/portal)
1. In the [Azure portal](https://portal.azure.com), browse to the primary database in the geo-replication partnership.
-2. On the SQL database page, select **geo-replication**.
-3. In the **SECONDARIES** list, select the database you want to remove from the geo-replication partnership.
-4. Click **Stop Replication**.
+2. Select **Replicas**.
+3. In the **Geo replicas** list, select the database you want to remove from the geo-replication partnership, select the ellipsis, and then select **Stop replication**.
- ![Remove secondary](./media/active-geo-replication-configure-portal/remove-secondary.png)
+ :::image type="content" source="./media/active-geo-replication-configure-portal/azure-portal-select-stop-replication.png" alt-text="Screenshot that shows selecting stop replication from the drop-down.":::
5. A confirmation window opens. Click **Yes** to remove the database from the geo-replication partnership. (Set it to a read-write database not part of any replication.)
+
+# [Azure CLI](#tab/azure-cli)
+
+Run the [az sql db replica delete-link](/cli/azure/sql/db/replica#az_sql_db_replica_delete-link) command.
+
+```azurecli
+az sql db replica delete-link --name guestlist --resource-group ContosoHotel --server contosoeast --partner-server contosowest
+```
+
+Confirm that you want to perform the operation.
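In unattended scripts, the confirmation prompt can be skipped with the `--yes` flag (a sketch; verify the flag against your installed CLI version):

```azurecli
# Sketch: remove the replication link without prompting for confirmation
az sql db replica delete-link --name guestlist --resource-group ContosoHotel --server contosoeast --partner-server contosowest --yes
```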
++

## Next steps
azure-sql Auditing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/auditing-overview.md
--++ Previously updated : 08/01/2021 Last updated : 08/25/2021 # Auditing for Azure SQL Database and Azure Synapse Analytics
For a script example, see [Configure auditing and threat detection using PowerSh
**REST API**:

-- [Create or Update Database Auditing Policy](/rest/api/sql/2017-03-01-preview/server-auditing-settings/create-or-update)
-- [Create or Update Server Auditing Policy](/rest/api/sql/server%20auditing%20settings/createorupdate)
+- [Create or Update Database Auditing Policy](/rest/api/sql/database%20auditing%20settings/createorupdate)
+- [Create or Update Server Auditing Policy](/rest/api/sql/2017-03-01-preview/server-auditing-settings/create-or-update)
- [Get Database Auditing Policy](/rest/api/sql/database%20auditing%20settings/get)
-- [Get Server Auditing Policy](/rest/api/sql/2017-03-01-preview/server-auditing-settings/get)
+- [Get Server Auditing Policy](/rest/api/sql/2017-03-01-preview/server-auditing-settings/get)
Extended policy with WHERE clause support for additional filtering:
azure-sql Automatic Tuning Email Notifications Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/automatic-tuning-email-notifications-configure.md
Azure SQL Database automatic tuning recommendations can be viewed in the [Azure
## Automate email notifications for automatic tuning recommendations
-The following solution automates the sending of email notifications containing automatic tuning recommendations. The solution described consists of automating execution of a PowerShell script for retrieving tuning recommendations using [Azure Automation](../../automation/automation-intro.md), and automation of scheduling email delivery job using [Microsoft Flow](https://flow.microsoft.com).
+The following solution automates the sending of email notifications containing automatic tuning recommendations. The solution described consists of automating execution of a PowerShell script for retrieving tuning recommendations using [Azure Automation](../../automation/automation-intro.md), and automation of scheduling email delivery job using [Microsoft Power Automate](https://flow.microsoft.com).
## Create Azure Automation account
Be sure to adjust the content by customizing the PowerShell script to your needs.
With the above steps, the PowerShell script to retrieve automatic tuning recommendations is loaded in Azure Automation. The next step is to automate and schedule the email delivery job.
-## Automate the email jobs with Microsoft Flow
+## Automate the email jobs with Microsoft Power Automate
-To complete the solution, as the final step, create an automation flow in Microsoft Flow consisting of three actions (jobs):
+To complete the solution, as the final step, create an automation flow in Microsoft Power Automate consisting of three actions (jobs):
- "**Azure Automation - Create job**" ΓÇô used to execute the PowerShell script to retrieve automatic tuning recommendations inside the Azure Automation runbook. - "**Azure Automation - Get job output**" ΓÇô used to retrieve output from the executed PowerShell script. - "**Office 365 Outlook ΓÇô Send an email**" ΓÇô used to send out email. E-mails are sent out using the work or school account of the individual creating the flow.
-To learn more about Microsoft Flow capabilities, see [Getting started with Microsoft Flow](/flow/getting-started).
+To learn more about Microsoft Power Automate capabilities, see [Getting started with Microsoft Power Automate](/power-automate/getting-started).
-Prerequisite for this step is to sign up for a [Microsoft Flow](https://flow.microsoft.com) account and to log in. Once inside the solution, follow these steps to set up a **new flow**:
+A prerequisite for this step is to sign up for a [Microsoft Power Automate](https://flow.microsoft.com) account and to log in. Once inside the solution, follow these steps to set up a **new flow**:
1. Access "**My flows**" menu item. 1. Inside My flows, select the "**+Create from blank**" link at the top of the page.
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
ALTER DATABASE [DB2] MODIFY (EDITION = 'Hyperscale', SERVICE_OBJECTIVE = 'HS_Gen
GO
```
+> [!NOTE]
+> To move a database that is a part of a [geo-replication](active-geo-replication-overview.md) relationship, either as the primary or as a secondary, to Hyperscale, you have to stop replication. Databases in a [failover group](auto-failover-group-overview.md) must be removed from the group first.
+>
+> Once a database has been moved to Hyperscale, you can create a new Hyperscale geo-replica for that database. Geo-replication for Hyperscale is in preview with certain [limitations](active-geo-replication-overview.md).
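As a sketch, a database can be removed from its failover group with the Azure CLI before such a move (all names are illustrative, not from the original article):

```azurecli
# Sketch: remove the database from its failover group before migrating it to Hyperscale
az sql failover-group update --name myfailovergroup --resource-group myResourceGroup --server myserver --remove-db mydb
```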
+
## Database high availability in Hyperscale

As in all other service tiers, Hyperscale guarantees data durability for committed transactions regardless of compute replica availability. The extent of downtime due to the primary replica becoming unavailable depends on the type of failover (planned vs. unplanned), and on the presence of at least one high-availability replica. In a planned failover (i.e. a maintenance event), the system either creates the new primary replica before initiating a failover, or uses an existing high-availability replica as the failover target. In an unplanned failover (i.e. a hardware failure on the primary replica), the system uses a high-availability replica as a failover target if one exists, or creates a new primary replica from the pool of available compute capacity. In the latter case, downtime duration is longer due to extra steps required to create the new primary replica.
azure-sql Replication Transactional Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/replication-transactional-overview.md
Azure SQL Managed Instance can support being a Subscriber from the following ver
- SQL Server 2016 and later
- SQL Server 2014 [RTM CU10 (12.0.4427.24)](https://support.microsoft.com/help/3094220/cumulative-update-10-for-sql-server-2014) or [SP1 CU3 (12.0.2556.4)](https://support.microsoft.com/help/3094221/cumulative-update-3-for-sql-server-2014-service-pack-1)
-- SQL Server 2012 [SP2 CU8 (11.0.5634.1)](https://support.microsoft.com/help/3082561/cumulative-update-8-for-sql-server-2012-sp2) or [SP3 (11.0.6020.0)](https://www.microsoft.com/download/details.aspx?id=49996)
+- SQL Server 2012 [SP2 CU8 (11.0.5634.1)](https://support.microsoft.com/help/3082561/cumulative-update-8-for-sql-server-2012-sp2) or [SP3 (11.0.6020.0)](https://www.microsoft.com/download/details.aspx?id=49996) or [SP4 (11.0.7001.0)](https://www.microsoft.com/download/details.aspx?id=56040)
> [!NOTE] >
azure-sql Sql Server To Sql Database Assessment Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/migration-guides/database/sql-server-to-sql-database-assessment-rules.md
Server Audits is not supported in Azure SQL Database.
**Recommendation**
-Consider Azure SQL Database audit features to replace Server Audits. Azure SQL supports audit and the features are richer than SQL Server. Azure SQL database can audit various database actions and events, including: Access to data, Schema changes (DDL), Data changes (DML), Accounts, roles, and permissions (DCL, Security exceptions. Azure SQL Database Auditing increases an organization's ability to gain deep insight into events and changes that occur within their database, including updates and queries against the data. Alternatively migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machine.
+Consider Azure SQL Database audit features to replace Server Audits. Azure SQL supports audit, and the features are richer than SQL Server's. Azure SQL Database can audit various database actions and events, including: access to data, schema changes (DDL), data changes (DML), accounts, roles, and permissions (DCL), and security exceptions. Azure SQL Database Auditing increases an organization's ability to gain deep insight into events and changes that occur within their database, including updates and queries against the data. Alternatively, migrate to Azure SQL Managed Instance or SQL Server on Azure Virtual Machines.
More information: [Auditing for Azure SQL Database ](../../database/auditing-overview.md)
More information: [Discontinued Database Engine functionality in SQL Server](/pr
**Category**: Warning

**Description**
-Following unsupported system and extended stored procedures cannot be used in Azure SQL database - `sp_dboption`, `sp_addserver`, `sp_dropalias`,`sp_activedirectory_obj`, `sp_activedirectory_scp`, `sp_activedirectory_start`.
+The following unsupported system and extended stored procedures can't be used in Azure SQL Database: `sp_dboption`, `sp_addserver`, `sp_dropalias`, `sp_activedirectory_obj`, `sp_activedirectory_scp`, `sp_activedirectory_start`.
**Recommendation**
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/concepts-private-clouds-clusters.md
The diagram shows a single Azure subscription with two private clouds that repre
## Host maintenance and lifecycle management
-One benefit of Azure VMware Solution private clouds is the platform is maintained for you. Microsoft is responsible for the lifecycle management of VMware software (ESXi, vCenter, and vSAN). Microsoft is also responsible for the lifecycle management of NSX-T appliances, bootstrapping the network configuration, such as creating the Tier-0 gateway and enabling North-South routing. You're responsible for NSX-T SDN configuration: network segments, distributed firewall rules, Tier 1 gateways, and load balancers.
+ [!INCLUDE [vmware-software-update-frequency](includes/vmware-software-update-frequency.md)]
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/rotate-cloudadmin-credentials.md
Title: Rotate the cloudadmin credentials for Azure VMware Solution
-description: Learn how to rotate the vCenter Server and NSX-T Manager credentials for your Azure VMware Solution private cloud.
+description: Learn how to rotate the vCenter Server credentials for your Azure VMware Solution private cloud.
Previously updated : 06/01/2021 Last updated : 08/25/2021
-#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter CloudAdmin and NSX-T admin credentials.
+#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter CloudAdmin credentials.
# Rotate the cloudadmin credentials for Azure VMware Solution
-In this article, you'll rotate the cloudadmin credentials (vCenter and NSX-T credentials) for your Azure VMware Solution private cloud. Although the passwords for these accounts don't expire, you can generate new ones. After generating new passwords, you must update VMware HCX Connector with the latest credentials applied.
+In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time. After generating a new password, you must update VMware HCX Connector with the latest password.
-You can also watch a video on how to [reset the vCenter CloudAdmin & NSX-T admin password](https://youtu.be/cK1qY3knj88).
+>[!IMPORTANT]
+>Currently, rotating your NSX-T Manager *admin* credentials isn't supported.
-## Prerequisites
-
-If you use your cloudadmin credentials for connected services like HCX, vRealize Orchestrator, vRealize Operations Manager, or VMware Horizon, your connections stop working once you update your password. So stop these services before initiating the password rotation. Otherwise, you'll experience temporary locks on your vCenter CloudAdmin and NSX-T admin accounts, as these services continuously call using your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
-## Reset your Azure VMware Solution cloudadmin credentials
+## Prerequisites
-In this step, you'll rotate the cloudadmin credentials for your Azure VMware Solution components.
+If you use your cloudadmin credentials for connected services like HCX, vRealize Orchestrator, vRealize Operations Manager, or VMware Horizon, your connections stop working once you update your password. So stop these services before initiating the password rotation. Otherwise, you'll experience temporary locks on your vCenter CloudAdmin account, as these services continuously call your old credentials. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md).
->[!NOTE]
->Remember to replace **{SubscriptionID}**, **{ResourceGroup}**, and **{PrivateCloudName}** with you private cloud information.
+## Reset your vCenter credentials
1. From the Azure portal, open an Azure Cloud Shell session.
-2. Update your vCenter CloudAdmin password.
+2. Update your vCenter *CloudAdmin* credentials. Remember to replace **{SubscriptionID}**, **{ResourceGroup}**, and **{PrivateCloudName}** with your private cloud information.
```azurecli-interactive az resource invoke-action --action rotateVcenterPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview" ```
-
-3. Update your NSX-T admin password.
-
- ```azurecli-interactive
- az resource invoke-action --action rotateNSXTPassword --ids "/subscriptions/{SubscriptionID}/resourceGroups/{ResourceGroup}/providers/Microsoft.AVS/privateClouds/{PrivateCloudName}" --api-version "2020-07-17-preview"
- ```
-
-## Update HCX Connector with the latest cloudadmin credentials
-
-In this step, you'll update HCX Connector with the updated credentials.
+
+## Update HCX Connector
1. Go to the on-premises HCX Connector at https://{ip of the HCX connector appliance}:443 and sign in using the new credentials.
- Be sure to use port 443.
+ Be sure to use port **443**.
2. On the VMware HCX Dashboard, select **Site Pairing**.
- :::image type="content" source="media/rotate-cloudadmin-credentials/hcx-site-pairing.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
+ :::image type="content" source="media/tutorial-vmware-hcx/site-pairing-complete.png" alt-text="Screenshot of VMware HCX Dashboard with Site Pairing highlighted.":::
3. Select the correct connection to Azure VMware Solution and select **Edit Connection**.
-4. Provide the new vCenter Server CloudAdmin user credentials and select **Edit**, which saves the credentials. Save should show successful.
+4. Provide the new vCenter user credentials and select **Edit**, which saves the credentials. The save should show as successful.
+ ## Next steps
-Now that you've covered resetting vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about:
+Now that you've covered resetting your vCenter credentials for Azure VMware Solution, you may want to learn about:
- [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) - [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md)
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-build-chat.md
Copy the fetched **ConnectionString** and it will be used later in this tutorial
* [Node.js 12.x or above](https://nodejs.org)
+# [Java](#tab/java)
+
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above
+- [Apache Maven](https://maven.apache.org/download.cgi)
+ ## Create the application
First let's create an empty ASP.NET Core app.
</html> ```
-You can test the server by running `dotnet run` and access `http://localhost:5000/index.html` in browser.
+You can test the server by running `dotnet run --urls http://localhost:8080` and accessing http://localhost:8080/index.html in a browser.
You may remember that in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md) the subscriber uses an API in the Web PubSub SDK to generate an access token from the connection string and uses it to connect to the service. This is usually not safe in a real-world application, because the connection string has high privileges to perform any operation against the service, so you don't want to share it with any client. Let's change this access token generation process to a REST API on the server side, so a client can call this API to request an access token every time it needs to connect, without needing to hold the connection string.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
This token generation code is similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we pass one more argument (`userId`) when generating the token. The user ID can be used to identify the identity of the client, so when you receive a message you know where it is coming from.
- You can test this API by running `dotnet run` and accessing `http://localhost:5000/negotiate?id=<user-id>` and it will give you the full url of the Azure Web PubSub with an access token.
+ You can test this API by running `dotnet run --urls http://localhost:8080` and accessing `http://localhost:8080/negotiate?id=<user-id>`, which will give you the full client URL of the Azure Web PubSub service with an access token.
-3. Then update `index.html` with the following script to get the token from server and connect to service
+3. Then update `index.html` to include the following script to get the token from the server and connect to the service
```html
- <script>
- (async function () {
- let id = prompt('Please input your user name');
- let res = await fetch(`/negotiate?id=${id}`);
- let url = await res.text();
- let ws = new WebSocket(url);
- ws.onopen = () => console.log('connected');
- })();
- </script>
+ <html>
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ </body>
+
+ <script>
+ (async function () {
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let url = await res.text();
+ let ws = new WebSocket(url);
+ ws.onopen = () => console.log('connected');
+ })();
+ </script>
+ </html>
```
- You can test it by opening the home page, input your user name, then you'll see `connected` being printed in browser console.
+ If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
# [JavaScript](#tab/javascript)
First create an empty express app.
</html> ```
-You can test the server by running `node server` and access `http://localhost:8080` in browser.
+You can test the server by running `node server` and accessing http://localhost:8080 in a browser.
You may remember that in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md) the subscriber uses an API in the Web PubSub SDK to generate an access token from the connection string and uses it to connect to the service. This is usually not safe in a real-world application, because the connection string has high privileges to perform any operation against the service, so you don't want to share it with any client. Let's change this access token generation process to a REST API on the server side, so a client can call this API to request an access token every time it needs to connect, without needing to hold the connection string.
You may remember in the [publish and subscribe message tutorial](./tutorial-pub-
3. Then update `index.html` with the following script to get the token from the server and connect to the service ```html
- <script>
- (async function () {
- let id = prompt('Please input your user name');
- let res = await fetch(`/negotiate?id=${id}`);
- let data = await res.json();
- let ws = new WebSocket(data.url);
- ws.onopen = () => console.log('connected');
- })();
- </script>
+
+ <html>
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ </body>
+
+ <script>
+ (async function () {
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let data = await res.json();
+ let ws = new WebSocket(data.url);
+ ws.onopen = () => console.log('connected');
+ })();
+ </script>
+ </html>
+ ```
+
+ If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
+
+# [Java](#tab/java)
+
+We will use the [Javalin](https://javalin.io/) web framework to host the web pages and handle incoming requests.
+
+1. First let's use Maven to create a new app `webpubsub-tutorial-chat` and switch into the *webpubsub-tutorial-chat* folder:
+
+ ```console
+ mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.tutorial --define artifactId=webpubsub-tutorial-chat --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4
+ cd webpubsub-tutorial-chat
```
- You can test it by opening the home page, input your user name, then you'll see `connected` being printed in browser console.
+2. Let's add the `javalin` web framework dependency into the `dependencies` node of `pom.xml`:
+
+ * `javalin`: simple web framework for Java
+ * `slf4j-simple`: Logger for Java
+
+ ```xml
+ <!-- https://mvnrepository.com/artifact/io.javalin/javalin -->
+ <dependency>
+ <groupId>io.javalin</groupId>
+ <artifactId>javalin</artifactId>
+ <version>3.13.6</version>
+ </dependency>
+
+ <dependency>
+ <groupId>org.slf4j</groupId>
+ <artifactId>slf4j-simple</artifactId>
+ <version>1.7.30</version>
+ </dependency>
+ ```
+
+3. Let's navigate to the */src/main/java/com/webpubsub/tutorial* directory, open the *App.java* file in your editor, use `Javalin.create` to serve static files:
+
+ ```java
+ package com.webpubsub.tutorial;
+
+ import io.javalin.Javalin;
+
+ public class App {
+ public static void main(String[] args) {
+ // start a server
+ Javalin app = Javalin.create(config -> {
+ config.addStaticFiles("public");
+ }).start(8080);
+ }
+ }
+ ```
+
+ Depending on your setup, you might need to explicitly set the language level to Java 8. This can be done in the pom.xml. Add the following snippet:
+ ```xml
+ <build>
+ <plugins>
+ <plugin>
+ <artifactId>maven-compiler-plugin</artifactId>
+ <version>3.8.0</version>
+ <configuration>
+ <source>1.8</source>
+ <target>1.8</target>
+ </configuration>
+ </plugin>
+ </plugins>
+ </build>
+ ```
+
+4. Let's create an HTML file and save it into */src/main/resources/public/index.html*. We'll use it for the UI of the chat app later.
+
+ ```html
+ <html>
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ </body>
+
+ </html>
+ ```
+
+You can test the server by running the following command under the directory containing the *pom.xml* file, then accessing http://localhost:8080 in a browser.
+
+```console
+mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.tutorial.App" -Dexec.cleanupDaemonThreads=false
+```
+
+You may remember that in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md) the subscriber uses an API in the Web PubSub SDK to generate an access token from the connection string and uses it to connect to the service. This is usually not safe in a real-world application, because the connection string has high privileges to perform any operation against the service, so you don't want to share it with any client. Let's change this access token generation process to a REST API on the server side, so a client can call this API to request an access token every time it needs to connect, without needing to hold the connection string.
+
+1. Add Azure Web PubSub SDK dependency into the `dependencies` node of `pom.xml`:
+
+ ```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-messaging-webpubsub</artifactId>
+ <version>1.0.0-beta.2</version>
+ </dependency>
+ ```
+
+2. Add a `/negotiate` API to the `App.java` file to generate the token:
+
+ ```java
+ package com.webpubsub.tutorial;
+
+ import com.azure.messaging.webpubsub.WebPubSubClientBuilder;
+ import com.azure.messaging.webpubsub.WebPubSubServiceClient;
+ import com.azure.messaging.webpubsub.models.GetAuthenticationTokenOptions;
+ import com.azure.messaging.webpubsub.models.WebPubSubAuthenticationToken;
+
+ import io.javalin.Javalin;
+
+ public class App {
+ public static void main(String[] args) {
+
+ if (args.length != 1) {
+ System.out.println("Expecting 1 arguments: <connection-string>");
+ return;
+ }
+
+ // create the service client
+ WebPubSubServiceClient client = new WebPubSubClientBuilder()
+ .connectionString(args[0])
+ .hub("chat")
+ .buildClient();
+
+ // start a server
+ Javalin app = Javalin.create(config -> {
+ config.addStaticFiles("public");
+ }).start(8080);
+
+
+ // Handle the negotiate request and return the token to the client
+ app.get("/negotiate", ctx -> {
+ String id = ctx.queryParam("id");
+ if (id == null) {
+ ctx.status(400);
+ ctx.result("missing user id");
+ return;
+ }
+ GetAuthenticationTokenOptions option = new GetAuthenticationTokenOptions();
+ option.setUserId(id);
+ WebPubSubAuthenticationToken token = client.getAuthenticationToken(option);
+ ctx.result(token.getUrl());
+ return;
+ });
+ }
+ }
+ ```
+
+ This token generation code is similar to the one we used in the [publish and subscribe message tutorial](./tutorial-pub-sub-messages.md), except we call the `setUserId` method to set the user ID when generating the token. The user ID can be used to identify the identity of the client, so when you receive a message you know where it is coming from.
+
+ You can test this API by running the following command, replacing `<connection_string>` with the **ConnectionString** fetched in the [previous step](#get-the-connectionstring-for-future-use), and accessing `http://localhost:8080/negotiate?id=<user-id>`, which will give you the full client URL of the Azure Web PubSub service with an access token.
+
+ ```console
+ mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.tutorial.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>'"
+ ```
+
+3. Then update `index.html` with the following script to get the token from the server and connect to the service.
+
+ ```html
+ <html>
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ </body>
+
+ <script>
+ (async function () {
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let url = await res.text();
+ let ws = new WebSocket(url);
+ ws.onopen = () => console.log('connected');
+ })();
+ </script>
+ </html>
+ ```
+
+ If you are using Chrome, you can test it by opening the home page and entering your user name. Press F12 to open the Developer Tools window, switch to the **Console** tab, and you'll see `connected` printed in the browser console.
For now, you need to implement the event handler by your own in C#, the steps ar
{ if (context.Request.Method == "OPTIONS") {
- ...
+ if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
+ {
+ context.Response.Headers["WebHook-Allowed-Origin"] = "*";
+ context.Response.StatusCode = 200;
+ return;
+ }
} else if (context.Request.Method == "POST") {
app.use(handler.getMiddleware());
In the above code, we simply print a message to console when a client is connected. You can see we use `req.context.userId` so we can see the identity of the connected client.
+# [Java](#tab/java)
+For now, you need to implement the event handler on your own in Java. The steps are straightforward, following [the protocol spec](./reference-cloud-events.md), and are illustrated below.
+
+1. Add HTTP handler for the event handler path, let's say `/eventhandler`.
+
+2. First we'd like to handle the abuse protection OPTIONS requests: we check whether the request contains the `WebHook-Request-Origin` header, and we return the `WebHook-Allowed-Origin` header in response. For simplicity, for demo purposes, we return `*` to allow all origins.
+ ```java
+
+ // validation: https://azure.github.io/azure-webpubsub/references/protocol-cloudevents#validation
+ app.options("/eventhandler", ctx -> {
+ ctx.header("WebHook-Allowed-Origin", "*");
+ });
+ ```
+
+3. Then we'd like to check whether the incoming requests are the events we expect. Let's say we now care about the system `connected` event, which should carry the `ce-type` header `azure.webpubsub.sys.connected`. We add the logic after abuse protection:
+ ```java
+ // validation: https://azure.github.io/azure-webpubsub/references/protocol-cloudevents#validation
+ app.options("/eventhandler", ctx -> {
+ ctx.header("WebHook-Allowed-Origin", "*");
+ });
+
+ // handle events: https://azure.github.io/azure-webpubsub/references/protocol-cloudevents#events
+ app.post("/eventhandler", ctx -> {
+ String event = ctx.header("ce-type");
+ if ("azure.webpubsub.sys.connected".equals(event)) {
+ String id = ctx.header("ce-userId");
+ System.out.println(id + " connected.");
+ }
+ ctx.status(200);
+ });
+
+ ```
+
+In the above code, we simply print a message to the console when a client is connected. You can see we use `ctx.header("ce-userId")` to read the identity of the connected client.
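The protocol spec linked above defines more CloudEvents attributes than `ce-type` and `ce-userId`. If you want richer logging, a variant of the handler above could read a few more of them; the header names here follow the CloudEvents protocol reference, but treat the exact set as an assumption to verify against the spec:

```java
// Variant of the event handler that logs more CloudEvents attributes.
app.post("/eventhandler", ctx -> {
    String hub = ctx.header("ce-hub");                   // which hub raised the event
    String connectionId = ctx.header("ce-connectionId"); // unique per client connection
    String userId = ctx.header("ce-userId");             // identity assigned at negotiate time
    String type = ctx.header("ce-type");                 // e.g. azure.webpubsub.sys.connected
    System.out.println(String.format("%s: connection %s (user %s) -> %s", hub, connectionId, userId, type));
    ctx.status(200);
});
```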
+ ## Set up the event handler
Then we need to set the Webhook URL in the service so it can know where to call
1. First download ngrok from https://ngrok.com/download, extract the executable to your local folder or your system bin folder.
2. Start ngrok
+
   ```bash
   ngrok http 8080
   ```
ngrok will print a URL (`https://<domain-name>.ngrok.io`) that can be accessed f
Then we update the service event handler and set the Webhook URL.
+Use the Azure CLI [az webpubsub event-handler hub](/cli/azure/webpubsub/event-handler/hub) command to update the event handler settings:
+
+ > [!Important]
+ > Replace &lt;your-unique-resource-name&gt; with the name of your Web PubSub resource created from the previous steps.
+ > Replace &lt;domain-name&gt; with the name ngrok printed.
+
+```azurecli-interactive
+az webpubsub event-handler hub update -n "<your-unique-resource-name>" -g "myResourceGroup" --hub-name chat --template url-template="https://<domain-name>.ngrok.io/eventhandler" user-event-pattern="*" system-event-pattern="connected"
+```
+
-After the update is completed, open the home page http://localhost:5000/index.html, input your user name, you'll see the connected message printed in the server console.
+After the update is completed, open the home page http://localhost:8080/index.html, enter your user name, and you'll see the connected message printed in the server console.
## Handle Message events
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
var serviceClient = context.RequestServices.GetRequiredService<WebPubSubServiceClient>(); if (context.Request.Method == "OPTIONS") {
- ...
+ if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
+ {
+ context.Response.Headers["WebHook-Allowed-Origin"] = "*";
+ context.Response.StatusCode = 200;
+ return;
+ }
} else if (context.Request.Method == "POST") {
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
<div id="messages"></div> <script> (async function () {
- ...
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let url = await res.text();
+ let ws = new WebSocket(url);
+ ws.onopen = () => console.log('connected');
let messages = document.querySelector('#messages'); ws.onmessage = event => {
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
{ if (context.Request.Method == "OPTIONS") {
- ...
+ if (context.Request.Headers["WebHook-Request-Origin"].Count > 0)
+ {
+ context.Response.Headers["WebHook-Allowed-Origin"] = "*";
+ context.Response.StatusCode = 200;
+ return;
+ }
} else if (context.Request.Method == "POST") {
The `ce-type` of `message` event is always `azure.webpubsub.user.message`, detai
}); ```
-Now run the server and open multiple browser instances, then you can chat with each other.
-
-The complete code sample of this tutorial can be found [here][code].
+Now run the server using `dotnet run --urls http://localhost:8080` and open multiple browser instances to access http://localhost:8080/index.html; then you can chat with each other.
-[code]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp/
+The complete code sample of this tutorial can be found [here][code-csharp].
# [JavaScript](#tab/javascript)
The complete code sample of this tutorial can be found [here][code].
let handler = new WebPubSubEventHandler(hubName, ['*'], { path: '/eventhandler', onConnected: async req => {
- ...
+ console.log(`${req.context.userId} connected`);
}, handleUserEvent: async (req, res) => { if (req.context.eventName === 'message')
The complete code sample of this tutorial can be found [here][code].
You can see `handleUserEvent` also has a `res` object where you can send a message back to the event sender. Here we simply call `res.success()` to make the WebHook return 200. (Note this call is required even if you don't want to return anything back to the client; otherwise the WebHook never returns and the client connection will be closed.)
+2. Update `index.html` to add the logic to send messages from the user to the server and display received messages in the page.
+
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ <input id="message" placeholder="Type to chat...">
+ <div id="messages"></div>
+ <script>
+ (async function () {
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let data = await res.json();
+ let ws = new WebSocket(data.url);
+ ws.onopen = () => console.log('connected');
+
+ let messages = document.querySelector('#messages');
+ ws.onmessage = event => {
+ let m = document.createElement('p');
+ m.innerText = event.data;
+ messages.appendChild(m);
+ };
+
+ let message = document.querySelector('#message');
+ message.addEventListener('keypress', e => {
+ if (e.charCode !== 13) return;
+ ws.send(message.value);
+ message.value = '';
+ });
+ })();
+ </script>
+ </body>
+
+ </html>
+ ```
+
+ You can see in the above code we use `WebSocket.send()` to send a message and `WebSocket.onmessage` to listen to messages from the service.
+
+3. `sendToAll` accepts an object as input and sends JSON text to the clients. In real scenarios, we probably need a complex object to carry more information about the message. Finally, update the handlers to broadcast JSON objects to all clients:
+
+ ```javascript
+ let handler = new WebPubSubEventHandler(hubName, ['*'], {
+ path: '/eventhandler',
+ onConnected: async req => {
+ console.log(`${req.context.userId} connected`);
+ await serviceClient.sendToAll({
+ type: "system",
+ message: `${req.context.userId} joined`
+ });
+ },
+ handleUserEvent: async (req, res) => {
+ if (req.context.eventName === 'message') {
+ await serviceClient.sendToAll({
+ from: req.context.userId,
+ message: req.data
+ });
+ }
+ res.success();
+ }
+ });
+ ```
+
+4. And update the client to parse JSON data:
+ ```html
+ <html>
+
+ <body>
+ <h1>Azure Web PubSub Chat</h1>
+ <input id="message" placeholder="Type to chat...">
+ <div id="messages"></div>
+ <script>
+ (async function () {
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let data = await res.json();
+ let ws = new WebSocket(data.url);
+ ws.onopen = () => console.log('connected');
+
+ let messages = document.querySelector('#messages');
+
+ ws.onmessage = event => {
+ let m = document.createElement('p');
+ let data = JSON.parse(event.data);
+ m.innerText = `[${data.type || ''}${data.from || ''}] ${data.message}`;
+ messages.appendChild(m);
+ };
+
+ let message = document.querySelector('#message');
+ message.addEventListener('keypress', e => {
+ if (e.charCode !== 13) return;
+ ws.send(message.value);
+ message.value = '';
+ });
+ })();
+ </script>
+ </body>
+
+ </html>
+ ```
+
+Now run the server and open multiple browser instances, then you can chat with each other.
+
+The complete code sample of this tutorial can be found [here][code-js].
+
+# [Java](#tab/java)
+
+The `ce-type` of the `message` event is always `azure.webpubsub.user.message`; for details, see [Event message](./reference-cloud-events.md#message).
+
+1. Handle message event
+
+ ```java
+ // handle events: https://azure.github.io/azure-webpubsub/references/protocol-cloudevents#events
+ app.post("/eventhandler", ctx -> {
+ String event = ctx.header("ce-type");
+ if ("azure.webpubsub.sys.connected".equals(event)) {
+ String id = ctx.header("ce-userId");
+ System.out.println(id + " connected.");
+ } else if ("azure.webpubsub.user.message".equals(event)) {
+ String id = ctx.header("ce-userId");
+ String message = ctx.body();
+ client.sendToAll(String.format("[%s] %s", id, message), WebPubSubContentType.TEXT_PLAIN);
+ }
+ ctx.status(200);
+ });
+ ```
+
+ This event handler uses `client.sendToAll()` to broadcast the received message to all clients.
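Note that the handler above references `WebPubSubContentType`, which doesn't appear in the imports shown earlier. Assuming it lives in the SDK's `models` package alongside the other imported types (an assumption worth verifying against the SDK), the extra import would be:

```java
// Assumed location of WebPubSubContentType in azure-messaging-webpubsub
import com.azure.messaging.webpubsub.models.WebPubSubContentType;
```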
+ 2. Update `index.html` to add the logic to send messages from the user to the server and display received messages in the page. ```html
The complete code sample of this tutorial can be found [here][code].
<div id="messages"></div> <script> (async function () {
- ...
+ let id = prompt('Please input your user name');
+ let res = await fetch(`/negotiate?id=${id}`);
+ let url = await res.text();
+ let ws = new WebSocket(url);
+ ws.onopen = () => console.log('connected');
let messages = document.querySelector('#messages'); ws.onmessage = event => {
The complete code sample of this tutorial can be found [here][code].
You can see in the above code we use `WebSocket.send()` to send a message and `WebSocket.onmessage` to listen to messages from the service.
-3. `sendToAll` accepts object as an input and send JSON text to the clients. In real scenarios, we probably need complex object to carry more information about the message. Finally update the handlers to broadcast JSON objects to all clients:
+3. Finally update the `connected` event handler to broadcast the connected event to all clients so they can see who joined the chat room.
- ```javascript
- let handler = new WebPubSubEventHandler(hubName, ['*'], {
- path: '/eventhandler',
- onConnected: async req => {
- console.log(`${req.context.userId} connected`);
- await serviceClient.sendToAll({
- type: "system",
- message: `${req.context.userId} joined`
- });
- },
- handleUserEvent: async (req, res) => {
- if (req.context.eventName === 'message') {
- await serviceClient.sendToAll({
- from: req.context.userId,
- message: req.data
- });
+ ```java
+
+ // handle events: https://azure.github.io/azure-webpubsub/references/protocol-cloudevents#events
+ app.post("/eventhandler", ctx -> {
+ String event = ctx.header("ce-type");
+ if ("azure.webpubsub.sys.connected".equals(event)) {
+ String id = ctx.header("ce-userId");
+ client.sendToAll(String.format("[SYSTEM] %s joined", id), WebPubSubContentType.TEXT_PLAIN);
+ } else if ("azure.webpubsub.user.message".equals(event)) {
+ String id = ctx.header("ce-userId");
+ String message = ctx.body();
+ client.sendToAll(String.format("[%s] %s", id, message), WebPubSubContentType.TEXT_PLAIN);
}
- res.success();
- }
+ ctx.status(200);
});
- ```
-4. And update the client to parse JSON data:
- ```javascript
- ws.onmessage = event => {
- let m = document.createElement('p');
- let data = JSON.parse(event.data);
- m.innerText = `[${data.type || ''}${data.from || ''}] ${data.message}`;
- messages.appendChild(m);
- };
```
-Now run the server and open multiple browser instances, then you can chat with each other.
+Now run the server with the below command and open multiple browser instances, then you can chat with each other.
-The complete code sample of this tutorial can be found [here][code].
+```console
+mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.tutorial.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>'"
+```
-[code]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp/
+The complete code sample of this tutorial can be found [here][code-java].
Check other tutorials to further dive into how to use the service.
> [!div class="nextstepaction"]
> [Explore more Azure Web PubSub samples](https://aka.ms/awps/samples)
+[code-js]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp/
+[code-java]: https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp/
+[code-csharp]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp/
azure-web-pubsub Tutorial Pub Sub Messages https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-web-pubsub/tutorial-pub-sub-messages.md
Copy the fetched **ConnectionString** and it will be used later in this tutorial
* [Python](https://www.python.org/)

# [Java](#tab/java)
-- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above.
-- [Apache Maven](https://maven.apache.org/download.cgi).
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above
+- [Apache Maven](https://maven.apache.org/download.cgi)
Clients connect to the Azure Web PubSub service through the standard WebSocket p
``` # [Java](#tab/java)
+1. First let's create a new folder `pubsub` for this tutorial:
+ ```cmd
+ mkdir pubsub
+ cd pubsub
+ ```
-1. First let's use Maven to create a new console app `webpubsub-quickstart-subscriber` and switch into the *webpubsub-quickstart-subscriber* folder:
+1. Then inside this `pubsub` folder let's use Maven to create a new console app `webpubsub-quickstart-subscriber` and switch into the *webpubsub-quickstart-subscriber* folder:
   ```console
   mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.quickstart --define artifactId=webpubsub-quickstart-subscriber --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4
   cd webpubsub-quickstart-subscriber
   ```
Clients connect to the Azure Web PubSub service through the standard WebSocket p
After connection is established, you'll receive messages through the WebSocket connection. So we use `onMessage(String message)` to listen to incoming messages.
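The subscriber's full source is elided in this digest; for orientation, here's a minimal sketch of what such a subscriber can look like, assuming the `azure-messaging-webpubsub` SDK types shown earlier and the `org.java-websocket:Java-WebSocket` client library (both the library choice and the code shape are assumptions, not necessarily the upstream article's exact listing):

```java
package com.webpubsub.quickstart;

import com.azure.messaging.webpubsub.WebPubSubClientBuilder;
import com.azure.messaging.webpubsub.WebPubSubServiceClient;
import com.azure.messaging.webpubsub.models.GetAuthenticationTokenOptions;
import com.azure.messaging.webpubsub.models.WebPubSubAuthenticationToken;

import org.java_websocket.client.WebSocketClient;
import org.java_websocket.handshake.ServerHandshake;

import java.net.URI;

public class App {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.println("Expecting 2 arguments: <connection-string> <hub-name>");
            return;
        }

        // Ask the service for a client access URL (with an embedded token).
        WebPubSubServiceClient service = new WebPubSubClientBuilder()
                .connectionString(args[0])
                .hub(args[1])
                .buildClient();
        WebPubSubAuthenticationToken token = service.getAuthenticationToken(new GetAuthenticationTokenOptions());

        // Connect a plain WebSocket client to that URL and print whatever arrives.
        WebSocketClient client = new WebSocketClient(new URI(token.getUrl())) {
            @Override
            public void onOpen(ServerHandshake handshake) { System.out.println("connected"); }

            @Override
            public void onMessage(String message) { System.out.println("Message received: " + message); }

            @Override
            public void onClose(int code, String reason, boolean remote) { }

            @Override
            public void onError(Exception e) { e.printStackTrace(); }
        };
        client.connectBlocking();
    }
}
```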
-4. Navigate to the directory containing the *pom.xml* file and compile the project by using the following `mvn` command.
-
- ```console
- mvn compile
- ```
-5. Then build the package
-
- ```console
- mvn package
- ```
-6. Run the following `mvn` command to execute the app, replacing `<connection_string>` with the **ConnectionString** fetched in [previous step](#get-the-connectionstring-for-future-use):
+4. Navigate to the directory containing the *pom.xml* file and run the app with the below command, replacing `<connection_string>` with the **ConnectionString** fetched in the [previous step](#get-the-connectionstring-for-future-use):
```console
- mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>' 'myHub1'"
+ mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>' 'myHub1'"
```
Now let's use Azure Web PubSub SDK to publish a message to the connected client.
# [Java](#tab/java)
-1. First let's use Maven to create a new console app `webpubsub-quickstart-publisher` and switch into the *webpubsub-quickstart-publisher* folder:
+1. Let's use another terminal and go back to the `pubsub` folder to create a publisher console app `webpubsub-quickstart-publisher` and switch into the *webpubsub-quickstart-publisher* folder:
   ```console
   mvn archetype:generate --define interactiveMode=n --define groupId=com.webpubsub.quickstart --define artifactId=webpubsub-quickstart-publisher --define archetypeArtifactId=maven-archetype-quickstart --define archetypeVersion=1.4
   cd webpubsub-quickstart-publisher
   ```
Now let's use Azure Web PubSub SDK to publish a message to the connected client.
The `sendToAll()` call simply sends a message to all connected clients in a hub.
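The publisher's source is likewise elided here; under the same assumptions (the `azure-messaging-webpubsub` SDK, with connection string, hub name, and message passed as arguments), a minimal sketch could look like:

```java
package com.webpubsub.quickstart;

import com.azure.messaging.webpubsub.WebPubSubClientBuilder;
import com.azure.messaging.webpubsub.WebPubSubServiceClient;
import com.azure.messaging.webpubsub.models.WebPubSubContentType;

public class App {
    public static void main(String[] args) {
        if (args.length != 3) {
            System.out.println("Expecting 3 arguments: <connection-string> <hub-name> <message>");
            return;
        }

        // Build the service client for the hub, then broadcast the message
        // to every client currently connected to it.
        WebPubSubServiceClient service = new WebPubSubClientBuilder()
                .connectionString(args[0])
                .hub(args[1])
                .buildClient();
        service.sendToAll(args[2], WebPubSubContentType.TEXT_PLAIN);
    }
}
```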
-4. Navigate to the directory containing the *pom.xml* file and compile the project by using the following `mvn` command.
-
- ```console
- mvn compile
- ```
-5. Then build the package
-
- ```console
- mvn package
- ```
-6. Run the following `mvn` command to execute the app, replacing `<connection_string>` with the **ConnectionString** fetched in [previous step](#get-the-connectionstring-for-future-use):
+4. Navigate to the directory containing the *pom.xml* file and run the project using the below command, replacing `<connection_string>` with the **ConnectionString** fetched in the [previous step](#get-the-connectionstring-for-future-use):
```console
- mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>' 'myHub1' 'Hello World'"
+ mvn compile & mvn package & mvn exec:java -Dexec.mainClass="com.webpubsub.quickstart.App" -Dexec.cleanupDaemonThreads=false -Dexec.args="'<connection_string>' 'myHub1' 'Hello World'"
```
-7. You can see that the previous subscriber received the below message:
+5. You can see that the previous subscriber received the below message:
   ```
   Message received: Hello World
   ```
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Title: Archive Tier support description: Learn about Archive Tier Support for Azure Backup Previously updated : 08/23/2021 Last updated : 08/25/2021
Supported clients:
- The capability is provided using PowerShell

>[!Note]
->Archive Tier support for SQL Servers in Azure VMs is now generally available in North Europe, Central India, South East Asia, and Australia East. For the detailed list of supported regions, refer to the [support matrix](#support-matrix). <br><br> For the remaining regions for SQL Servers in Azure VMs, Archive Tier support is in limited public preview. Archive Tier support for Azure Virtual Machines is also in limited public preview. To sign up for limited public preview, use this [link](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
+>Archive Tier support for SQL Servers in Azure VMs is now generally available in multiple regions. For the detailed list of supported regions, see the [support matrix](#support-matrix). <br><br> For the remaining regions for SQL Servers in Azure VMs, Archive Tier support is in limited public preview. Archive Tier support for Azure Virtual Machines is also in limited public preview. To sign up for limited public preview, use this [link](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR463S33c54tEiJLEM6Enqb9UNU5CVTlLVFlGUkNXWVlMNlRPM1lJWUxLRy4u).
## Get started with PowerShell
Stop protection and delete data deletes all the recovery points. For recovery po
| Workloads | Preview | Generally available |
| --- | --- | --- |
| SQL Server in Azure VM | East US, South Central US, North Central US, West Europe, UK South | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, Central US, East US 2, West US, West US 2, West Central US |
-| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe | None |
+| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe, Australia South East, France Central, France South, Japan West, Korea Central, Korea South | None |
## Error codes and troubleshooting steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/language-support.md
Computer Vision's OCR APIs support several languages. They do not require you to
|Language| Language code | Read 3.2 | OCR API | Read 3.0/3.1 |
|:--|:-:|:--:|:-:|:-:|
-|Afrikaans|`af`|✔ | | |
-|Albanian |`sq`|✔ | | |
-|Arabic | `ar`| | ✔ | |
-|Asturian |`ast`|✔ | | |
-|Basque |`eu`| ✔ | | |
-|Bislama |`bi`|✔ | | |
-|Breton |`br`|✔ | | |
-|Catalan |`ca`|✔ | | |
-|Cebuano |`ceb`|✔ | | |
-|Chamorro |`ch`|✔| | |
-|Chinese Simplified | `zh-Hans`|✔ |✔ | |
-|Chinese Traditional | `zh-Hant`|✔ |✔ | |
-|Cornish |`kw`|✔ | | |
-|Corsican |`co`|✔ | | |
-|Crimean Tatar Latin |`crh`| ✔ | | |
-|Czech | `cs` |✔ | ✔ | |
-|Danish | `da` |✔ | ✔ | |
-|Dutch | `nl` |✔ |✔ |✔ |
-|English (incl. handwritten) | `en` |✔ |✔ (print only)|✔ |
-|Estonian |`et`|✔ | | |
-|Fijian |`fj`|✔ | | |
-|Filipino |`fil`|✔ | | |
-|Finnish | `fi` |✔ |✔ | |
-|French | `fr` |✔ |✔ |✔ |
-|Friulian | `fur` |✔ | | |
-|Galician | `gl` |✔ | | |
-|German | `de` |✔ |✔ |✔ |
-|Gilbertese | `gil` |✔ | | |
-|Greek | `el` | |✔ | |
-|Greenlandic | `kl` |✔ | | |
-|Haitian Creole | `ht` |✔ | | |
-|Hani | `hni` |✔ | | |
-|Hmong Daw Latin | `mww` | ✔ | | |
-|Hungarian | `hu` | ✔ |✔ | |
-|Indonesian | `id` |✔ | | |
-|Interlingua | `ia` |✔ | | |
-|Inuktitut Latin | `iu` | ✔ | | |
-|Irish | `ga` |✔ | | |
-|Italian | `it` |✔ |✔ |✔ |
-|Japanese | `ja` |✔ |✔ | |
-|Javanese | `jv` |✔ | | |
-|K'iche' | `quc` |✔ | | |
-|Kabuverdianu | `kea` |✔ | | |
-|Kachin Latin | `kac` |✔ | | |
-|Kara-Kalpak | `kaa` | ✔ | | |
-|Kashubian | `csb` |✔ | | |
-|Khasi | `kha` | ✔ | | |
-|Korean | `ko` |✔ |✔ | |
-|Kurdish Latin | `kur` |✔ | | |
-|Luxembourgish | `lb` | ✔ | | |
-|Malay Latin | `ms` | ✔ | | |
-|Manx | `gv` | ✔ | | |
-|Neapolitan | `nap` | ✔ | | |
-|Norwegian | `nb` | | ✔ | |
-|Norwegian | `no` | ✔ | | |
-|Occitan | `oc` | ✔ | | |
-|Polish | `pl` | ✔ |✔ | |
-|Portuguese | `pt` |✔ |✔ |✔ |
-|Romanian | `ro` | | ✔ | |
-|Romansh | `rm` | ✔ | | |
-|Russian | `ru` | |✔ | |
-|Scots | `sco` | ✔ | | |
-|Scottish Gaelic | `gd` |✔ | | |
-|Serbian Cyrillic | `sr-Cyrl` | |✔ | |
-|Serbian Latin | `sr-Latn` | |✔ | |
-|Slovak | `sk` | |✔ | |
-|Slovenian | `slv` | ✔ || |
-|Spanish | `es` |✔ |✔ |✔ |
-|Swahili Latin | `sw` |✔ | | |
-|Swedish | `sv` |✔ |✔ | |
-|Tatar Latin | `tat` | ✔ | | |
-|Tetum | `tet` |✔ | | |
-|Turkish | `tr` |✔ | ✔ | |
-|Upper Sorbian | `hsb` |✔ | | |
-|Uzbek Latin | `uz` |✔ | | |
-|Volapük | `vo` | ✔ | | |
-|Walser | `wae` | ✔ | | |
-|Western Frisian | `fy` | ✔ | | |
-|Yucatec Maya | `yua` | ✔ | | |
-|Zhuang | `za` |✔ | | |
-|Zulu | `zu` | ✔ | | |
+|Afrikaans|`af`|✅ | | |
+|Albanian |`sq`|✅ | | |
+|Arabic | `ar`| | ✅ | |
+|Asturian |`ast`|✅ | | |
+|Basque |`eu`| ✅ | | |
+|Bislama |`bi`|✅ | | |
+|Breton |`br`|✅ | | |
+|Catalan |`ca`|✅ | | |
+|Cebuano |`ceb`|✅ | | |
+|Chamorro |`ch`|✅| | |
+|Chinese Simplified | `zh-Hans`|✅ |✅ | |
+|Chinese Traditional | `zh-Hant`|✅ |✅ | |
+|Cornish |`kw`|✅ | | |
+|Corsican |`co`|✅ | | |
+|Crimean Tatar Latin |`crh`| ✅ | | |
+|Czech | `cs` |✅ | ✅ | |
+|Danish | `da` |✅ | ✅ | |
+|Dutch | `nl` |✅ |✅ |✅ |
+|English (incl. handwritten) | `en` |✅ |✅ (print only)|✅ |
+|Estonian |`et`|✅ | | |
+|Fijian |`fj`|✅ | | |
+|Filipino |`fil`|✅ | | |
+|Finnish | `fi` |✅ |✅ | |
+|French | `fr` |✅ |✅ |✅ |
+|Friulian | `fur` |✅ | | |
+|Galician | `gl` |✅ | | |
+|German | `de` |✅ |✅ |✅ |
+|Gilbertese | `gil` |✅ | | |
+|Greek | `el` | |✅ | |
+|Greenlandic | `kl` |✅ | | |
+|Haitian Creole | `ht` |✅ | | |
+|Hani | `hni` |✅ | | |
+|Hmong Daw Latin | `mww` | ✅ | | |
+|Hungarian | `hu` | ✅ |✅ | |
+|Indonesian | `id` |✅ | | |
+|Interlingua | `ia` |✅ | | |
+|Inuktitut Latin | `iu` | ✅ | | |
+|Irish | `ga` |✅ | | |
+|Italian | `it` |✅ |✅ |✅ |
+|Japanese | `ja` |✅ |✅ | |
+|Javanese | `jv` |✅ | | |
+|K'iche' | `quc` |✅ | | |
+|Kabuverdianu | `kea` |✅ | | |
+|Kachin Latin | `kac` |✅ | | |
+|Kara-Kalpak | `kaa` | ✅ | | |
+|Kashubian | `csb` |✅ | | |
+|Khasi | `kha` | ✅ | | |
+|Korean | `ko` |✅ |✅ | |
+|Kurdish Latin | `kur` |✅ | | |
+|Luxembourgish | `lb` | ✅ | | |
+|Malay Latin | `ms` | ✅ | | |
+|Manx | `gv` | ✅ | | |
+|Neapolitan | `nap` | ✅ | | |
+|Norwegian | `nb` | | ✅ | |
+|Norwegian | `no` | ✅ | | |
+|Occitan | `oc` | ✅ | | |
+|Polish | `pl` | ✅ |✅ | |
+|Portuguese | `pt` |✅ |✅ |✅ |
+|Romanian | `ro` | | ✅ | |
+|Romansh | `rm` | ✅ | | |
+|Russian | `ru` | |✅ | |
+|Scots | `sco` | ✅ | | |
+|Scottish Gaelic | `gd` |✅ | | |
+|Serbian Cyrillic | `sr-Cyrl` | |✅ | |
+|Serbian Latin | `sr-Latn` | |✅ | |
+|Slovak | `sk` | |✅ | |
+|Slovenian | `slv` | ✅ || |
+|Spanish | `es` |✅ |✅ |✅ |
+|Swahili Latin | `sw` |✅ | | |
+|Swedish | `sv` |✅ |✅ | |
+|Tatar Latin | `tat` | ✅ | | |
+|Tetum | `tet` |✅ | | |
+|Turkish | `tr` |✅ | ✅ | |
+|Upper Sorbian | `hsb` |✅ | | |
+|Uzbek Latin | `uz` |✅ | | |
+|Volapük | `vo` | ✅ | | |
+|Walser | `wae` | ✅ | | |
+|Western Frisian | `fy` | ✅ | | |
+|Yucatec Maya | `yua` | ✅ | | |
+|Zhuang | `za` |✅ | | |
+|Zulu | `zu` | ✅ | | |
## Image analysis
-Some actions of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) for a list of all the actions you can do with image analysis.
+Some actions of the [Analyze - Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-1-g) API support multiple languages; see the table below for all the actions you can do with image analysis in each language. Languages for tagging are only available in API version 3.2 or later.
|Language | Language code | Categories | Tags | Description | Adult | Brands | Color | Faces | ImageType | Objects | Celebrities | Landmarks |
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-|Chinese | `zh` | ✔ | ✔| ✔|-|-|-|-|-|❌|✔|✔|
-|English | `en` | ✔ | ✔| ✔|✔|✔|✔|✔|✔|✔|✔|✔|
-|Japanese | `ja` | ✔ | ✔| ✔|-|-|-|-|-|❌|✔|✔|
-|Portuguese | `pt` | ✔ | ✔| ✔|-|-|-|-|-|❌|✔|✔|
-|Spanish | `es` | ✔ | ✔| ✔|-|-|-|-|-|❌|✔|✔|
+|Arabic |`ar`| | ✅| |||||| |||
+|Azeri (Azerbaijani) |`az`| | ✅| |||||| |||
+|Bulgarian |`bg`| | ✅| |||||| |||
+|Bosnian Latin |`bs`| | ✅| |||||| |||
+|Catalan |`ca`| | ✅| |||||| |||
+|Czech |`cs`| | ✅| |||||| |||
+|Welsh |`cy`| | ✅| |||||| |||
+|Danish |`da`| | ✅| |||||| |||
+|German |`de`| | ✅| |||||| |||
+|Greek |`el`| | ✅| |||||| |||
+|English |`en`|✅ | ✅| ✅|✅|✅|✅|✅|✅|✅|✅|✅|
+|Spanish |`es`|✅ | ✅| ✅|||||| |✅|✅|
+|Estonian |`et`| | ✅| |||||| |||
+|Basque |`eu`| | ✅| |||||| |||
+|Finnish |`fi`| | ✅| |||||| |||
+|French |`fr`| | ✅| |||||| |||
+|Irish |`ga`| | ✅| |||||| |||
+|Galician |`gl`| | ✅| |||||| |||
+|Hebrew |`he`| | ✅| |||||| |||
+|Hindi |`hi`| | ✅| |||||| |||
+|Croatian |`hr`| | ✅| |||||| |||
+|Hungarian |`hu`| | ✅| |||||| |||
+|Indonesian |`id`| | ✅| |||||| |||
+|Italian |`it`| | ✅| |||||| |||
+|Japanese |`ja`|✅ | ✅| ✅|||||| |✅|✅|
+|Kazakh |`kk`| | ✅| |||||| |||
+|Korean |`ko`| | ✅| |||||| |||
+|Lithuanian |`lt`| | ✅| |||||| |||
+|Latvian |`lv`| | ✅| |||||| |||
+|Macedonian |`mk`| | ✅| |||||| |||
+|Malay Malaysia |`ms`| | ✅| |||||| |||
+|Norwegian (Bokmal) |`nb`| | ✅| |||||| |||
+|Dutch |`nl`| | ✅| |||||| |||
+|Polish |`pl`| | ✅| |||||| |||
+|Dari |`prs`| | ✅| |||||| |||
+| Portuguese-Brazil|`pt-BR`| | ✅| |||||| |||
+| Portuguese-Portugal |`pt`/`pt-PT`|✅ | ✅| ✅|||||| |✅|✅|
+|Romanian |`ro`| | ✅| |||||| |||
+|Russian |`ru`| | ✅| |||||| |||
+|Slovak |`sk`| | ✅| |||||| |||
+|Slovenian |`sl`| | ✅| |||||| |||
+|Serbian - Cyrillic RS |`sr-Cyrl`| | ✅| |||||| |||
+|Serbian - Latin RS |`sr-Latn`| | ✅| |||||| |||
+|Swedish |`sv`| | ✅| |||||| |||
+|Thai |`th`| | ✅| |||||| |||
+|Turkish |`tr`| | ✅| |||||| |||
+|Ukrainian |`uk`| | ✅| |||||| |||
+|Vietnamese |`vi`| | ✅| |||||| |||
+|Chinese Simplified |`zh`/ `zh-Hans`|✅ | ✅| ✅|||||| |✅|✅|
+|Chinese Traditional |`zh-Hant`| | ✅| |||||| |||
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Computer-vision/whats-new.md
Learn what's new in the service. These items may be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## August 2021
+
+### Image tagging language expansion
+
+The [latest version (v3.2)](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f200) of the Image tagger now supports tags in 50 languages. See the [language support](language-support.md) page for more information.
+ ## May 2021 ### Spatial Analysis container update
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Translator/language-support.md
| Bulgarian | `bg` |✔|✔|✔|✔|✔|
| Cantonese (Traditional) | `yue` |✔|✔||||
| Catalan | `ca` |✔|✔|✔|✔|✔|
+| Chinese (Literary) | `lzh` |✔|||||
| Chinese Simplified | `zh-Hans` |✔|✔|✔|✔|✔|
| Chinese Traditional | `zh-Hant` |✔|✔|✔|✔||
| Croatian | `hr` |✔|✔|✔|✔|✔|
cognitive-services Concept Active Inactive Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-active-inactive-events.md
Title: Active and inactive events - Personalizer description: This article discusses the use of active and inactive events within the Personalizer service.++
+ms.
cognitive-services Concept Active Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-active-learning.md
Title: Learning policy - Personalizer description: Learning settings determine the *hyperparameters* of the model training. Two models of the same data that are trained on different learning settings will end up different.++
+ms.
cognitive-services Concept Apprentice Mode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-apprentice-mode.md
Title: Apprentice mode - Personalizer description: Learn how to use apprentice mode to gain confidence in a model without changing any code.++
+ms.
cognitive-services Concept Auto Optimization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-auto-optimization.md
Title: Auto-optimize - Personalizer description: This article provides a conceptual overview of the auto-optimize feature for Azure Personalizer service.++
+ms.
cognitive-services Concept Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-feature-evaluation.md
Title: Feature evaluation - Personalizer description: When you run an Evaluation in your Personalizer resource from the Azure portal, Personalizer provides information about what features of context and actions are influencing the model. --++
+ms.
cognitive-services Concept Multi Slot Personalization https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-multi-slot-personalization.md
Title: Multi-slot personalization description: Learn where and when to use single-slot and multi-slot personalization with the Personalizer Rank and Reward APIs. -++
cognitive-services Concept Rewards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concept-rewards.md
Title: Reward score - Personalizer description: The reward score indicates how well the personalization choice, RewardActionID, resulted for the user. The value of the reward score is determined by your business logic, based on observations of user behavior. Personalizer trains its machine learning models by evaluating the rewards.++
+ms.
Last updated 02/20/2020
By adding up reward scores, your final reward may be outside the expected score
Personalizer will correlate the information of a Rank call with the rewards sent in Reward calls to train the model. These may come at different times. Personalizer waits for a limited time, starting when the Rank call happened, even if the Rank call was made as an inactive event, and activated later.
-If the **Reward Wait Time** expires, and there has been no reward information, a default reward is applied to that event for training. The maximum wait duration is 6 days.
+If the **Reward Wait Time** expires and there has been no reward information, a default reward is applied to that event for training. The maximum wait duration is 2 days. If your scenario requires longer reward wait times (for example, marketing email campaigns), we are offering a private preview of longer wait times. Open a support ticket in the Azure portal to get in contact with the team and see whether you qualify and it can be offered to you. A sketch of sending a reward promptly appears below.
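To stay well inside any wait window, send the reward as soon as your business logic can score the interaction. As a rough illustration (not code from this article), a reward is a small JSON POST against the event ID used in the earlier Rank call; the endpoint path, header name, and resource/key placeholders below are assumptions based on the public Personalizer REST API and should be verified against the reference:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RewardExample {
    public static void main(String[] args) throws Exception {
        // Assumed placeholders: your Personalizer endpoint, key, and the event ID from Rank.
        String endpoint = "https://<your-resource>.cognitiveservices.azure.com";
        String eventId = "<event-id-from-rank>";

        // POST {endpoint}/personalizer/v1.0/events/{eventId}/reward with body {"value": <score>}
        URL url = new URL(endpoint + "/personalizer/v1.0/events/" + eventId + "/reward");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", "<your-key>");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write("{\"value\": 1.0}".getBytes(StandardCharsets.UTF_8));
        }
        // The service acknowledges the reward; training correlation happens asynchronously.
        System.out.println("Reward call returned HTTP " + conn.getResponseCode());
    }
}
```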
## Best practices for reward wait time
cognitive-services Concepts Exploration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concepts-exploration.md
Title: Exploration - Personalizer description: With exploration, Personalizer is able to continue delivering good results, even as user behavior changes. Choosing an exploration setting is a business decision about the proportion of user interactions to explore with, in order to improve the model.--++
+ms.
cognitive-services Concepts Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concepts-features.md
Title: "Features: Action and context - Personalizer" description: Personalizer uses features, information about actions and context, to make better ranking suggestions. Features can be very generic, or specific to an item.--++
+ms.
cognitive-services Concepts Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concepts-offline-evaluation.md
Title: Use the Offline Evaluation method - Personalizer description: This article will explain how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.--++
+ms.
cognitive-services Concepts Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concepts-reinforcement-learning.md
Title: Reinforcement Learning - Personalizer description: Personalizer uses information about actions and current context to make better ranking suggestions. The information about these actions and context are attributes or properties that are referred to as features.--++
+ms.
cognitive-services Concepts Scalability Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/concepts-scalability-performance.md
Title: Scalability and Performance - Personalizer description: "High-performance and high-traffic websites and applications have two main factors to consider with Personalizer for scalability and performance: latency and training throughput."--++
+ms.
cognitive-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/encrypt-data-at-rest.md
Title: Personalizer service encryption of data at rest description: Microsoft offers Microsoft-managed encryption keys, and also lets you manage your Cognitive Services subscriptions with your own keys, called customer-managed keys (CMK). This article covers data encryption at rest for Personalizer, and how to enable and manage CMK. -+ Last updated 08/28/2020-+ #Customer intent: As a user of the Personalizer service, I want to learn how encryption at rest works.
cognitive-services Ethics Responsible Use https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/ethics-responsible-use.md
Title: Ethics and responsible use - Personalizer description: These guidelines are aimed at helping you to implement personalization in a way that helps you build trust in your company and service. Be sure to pause to research, learn and deliberate on the impact of the personalization on people's lives. When in doubt, seek guidance.--++
+ms.
cognitive-services How Personalizer Works https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-personalizer-works.md
Title: How Personalizer Works - Personalizer description: The Personalizer _loop_ uses machine learning to build the model that predicts the top action for your content. The model is trained exclusively on your data that you sent to it with the Rank and Reward calls.++
+ms.
cognitive-services How To Create Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-to-create-resource.md
Title: Create Personalizer resource description: In this article, learn how to create a personalizer resource in the Azure portal for each feedback loop. ++
+ms.
cognitive-services How To Learning Behavior https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-to-learning-behavior.md
Title: Configure learning behavior description: Apprentice mode gives you confidence in the Personalizer service and its machine learning capabilities, and provides metrics that the service is sent information that can be learned from – without risking online traffic.++
+ms.
cognitive-services How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-to-manage-model.md
Title: Manage model and learning settings - Personalizer description: The machine-learned model and learning settings can be exported for backup in your own source control system.++
+ms.
cognitive-services How To Multi Slot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-to-multi-slot.md
Title: How to use multi-slot with Personalizer description: Learn how to use multi-slot with Personalizer to improve content recommendations provided by the service. -++
cognitive-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-to-offline-evaluation.md
Title: How to perform offline evaluation - Personalizer description: This article will show you how to use offline evaluation to measure effectiveness of your app and analyze your learning loop.--++
+ms.
cognitive-services How To Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/how-to-settings.md
Title: Configure Personalizer description: Service configuration includes how the service treats rewards, how often the service explores, how often the model is retrained, and how much data is stored.++
+ms.
cognitive-services Quickstart Personalizer Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/quickstart-personalizer-sdk.md
Title: "Quickstart: Create and use learning loop with SDK - Personalizer" description: This quickstart shows you how to create and manage your knowledge base using the Personalizer client library.++
+ms.
cognitive-services Terminology https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/terminology.md
Title: Terminology - Personalizer description: Personalizer uses terminology from reinforcement learning. These terms are used in the Azure portal and the APIs.++
+ms.
cognitive-services Tutorial Use Azure Notebook Generate Loop Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/tutorial-use-azure-notebook-generate-loop-data.md
Title: "Tutorial: Azure Notebook - Personalizer" description: This tutorial simulates a Personalizer loop _system in an Azure Notebook, which suggests which type of coffee a customer should order. The users and their preferences are stored in a user dataset. Information about the coffee is also available and stored in a coffee dataset.--++
+ms.
cognitive-services Tutorial Use Personalizer Chat Bot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/tutorial-use-personalizer-chat-bot.md
Title: Use Personalizer in chat bot - Personalizer description: Customize a C# .NET chat bot with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.++
+ms.
cognitive-services Tutorial Use Personalizer Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/tutorial-use-personalizer-web-app.md
Title: Use web app - Personalizer description: Customize a C# .NET web app with a Personalizer loop to provide the correct content to a user based on actions (with features) and context features.++
+ms.
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/what-is-personalizer.md
Title: What is Personalizer? description: Personalizer is a cloud-based service that allows you to choose the best experience to show to your users, learning from their real-time behavior.++
+ms.
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/whats-new.md
Title: What's new - Personalizer description: This article contains news about Personalizer.--++
+ms.
cognitive-services Where Can You Use Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/personalizer/where-can-you-use-personalizer.md
Title: Where and how to use - Personalizer description: Personalizer can be applied in any situation where your application can select the right item, action, or product to display - in order to make the experience better, achieve better business results, or improve productivity.++
+ms.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/language-support.md
| Language | Language code | v3.x support | Starting with v3 model version: | Notes |
|:-|:-:|:-:|:--:|:--:|
-| English | `en` | ✓ | API endpoint: 2019-10-01 <br> Container: 2020-04-16 | |
+| English | `en` | ✓ | API endpoint: 2020-11-01 <br> Container: 2020-04-16 | |
#### [Personally Identifiable Information (PII)](#tab/pii)
cognitive-services Client Libraries Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/text-analytics/quickstarts/client-libraries-rest-api.md
Previously updated : 08/05/2021 Last updated : 08/25/2021 keywords: text mining, sentiment analysis, text analytics
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Use the Text Analytics client library and REST API
-Use this article to get started with the Text Analytics client library and REST API. Follow these steps to try out examples code for mining text:
-
-* Sentiment analysis
-* Opinion mining
-* Language detection
-* Entity recognition
-* Personal Identifying Information recognition
-* Key phrase extraction
+Use this article to get started with the Text Analytics client library and REST API. Follow these steps to try out example code for mining text.
::: zone pivot="programming-language-csharp" > [!IMPORTANT]
-> * The latest stable version of the Text Analytics API is `3.1`.
-> * Be sure to only follow the instructions for the version you are using.
-> * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below.
-> * You can also use the latest preview version of the client library to use extractive summarization. See the following samples [on GitHub](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/textanalytics/Azure.AI.TextAnalytics/samples/Sample8_ExtractSummary.md).
-
+> * This quickstart only covers the following versions of the API: v3.1 and v3.2-preview.
[!INCLUDE [C# quickstart](../includes/quickstarts/csharp-sdk.md)]
Use this article to get started with the Text Analytics client library and REST
::: zone pivot="programming-language-python" > [!IMPORTANT]
-> * The latest stable version of the Text Analytics API is `3.1`.
-> * Be sure to only follow the instructions for the version you are using.
-> * The code in this article uses synchronous methods and un-secured credentials storage for simplicity reasons. For production scenarios, we recommend using the batched asynchronous methods for performance and scalability. See the reference documentation below.
-> * You can also use the latest preview version of the client library to use extractive summarization. See the following samples [on GitHub](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/textanalytics/azure-ai-textanalytics/samples/sample_extract_summary.py)
-
+> * This quickstart only covers the following versions of the API: v3.1 and v3.2-preview.
[!INCLUDE [Python quickstart](../includes/quickstarts/python-sdk.md)]
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Microsoft
Your logic app's workflow can use actions that get responses from the Microsoft Graph Security connector and make that output available to other actions in your workflow. You can also have other actions in your workflow use the output from the Microsoft Graph Security connector actions. For example, if you get high severity alerts through the Microsoft Graph Security connector, you can send those alerts in an email message by using the Outlook connector.
-To learn more about Microsoft Graph Security, see the [Microsoft Graph Security API overview](/graph/security-concept-overview). If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). If you're looking for Power Automate or PowerApps, see [What is Power Automate?](https://flow.microsoft.com/) or [What is Power Apps?](https://powerapps.microsoft.com/)
+To learn more about Microsoft Graph Security, see the [Microsoft Graph Security API overview](/graph/security-concept-overview). If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). If you're looking for Power Automate or Power Apps, see [What is Power Automate?](https://flow.microsoft.com/) or [What is Power Apps?](https://powerapps.microsoft.com/)
## Prerequisites
container-registry Manual Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/manual-regional-move.md
While [Azure Resource Mover](../resource-mover/overview.md) can't currently auto
* Use the template to deploy a registry in a different Azure region * Import registry content from the source registry to the target registry - [!INCLUDE [container-registry-geo-replication-include](../../includes/container-registry-geo-replication-include.md)] ## Prerequisites
Azure CLI
## Considerations
-* Use steps in this article to move the registry to a different region in the same subscription. More configuration is needed to move a registry to a different Azure subscription or Active Directory tenant.
-* Exporting and using a Resource Manager template can help re-create many registry settings. You can edit the template to configure additional settings, or update the target registry after creation.
+* Use steps in this article to move the registry to a different region in the same subscription. More configuration may be needed to move a registry to a different Azure subscription in the same Active Directory tenant.
+* Exporting and using a Resource Manager template can help re-create many registry settings. You can edit the template to configure more settings, or update the target registry after creation.
+* Currently, Azure Container Registry doesn't support a registry move to a different Active Directory tenant. This limitation applies to both registries encrypted with a [customer-managed key](container-registry-customer-managed-keys.md) and unencrypted registries.
+* If you are unable to move a registry as outlined in this article, create a new registry, manually recreate settings, and [import registry content in the target registry](#import-registry-content-in-target-registry).
## Export template from source registry
After creating the registry in the target region, use the [az acr import](/cli/a
* Use the Azure CLI commands [az acr repository list](/cli/azure/acr/repository#az_acr_repository_list) and [az acr repository show-tags](/cli/azure/acr/repository#az_acr_repository_show_tags), or Azure PowerShell equivalents, to help enumerate the contents of your source registry. * Run the import command for individual artifacts, or script it to run over a list of artifacts.
-The following sample Azure CLI script enumerates the source repositories and tags and then imports the artifacts to a target registry. Modify as needed to import specific repositories or tags.
+The following sample Azure CLI script enumerates the source repositories and tags and then imports the artifacts to a target registry in the same Azure subscription. Modify as needed to import specific repositories or tags. To import from a registry in a different subscription or tenant, see examples in [Import container images to a container registry](container-registry-import-images.md).
```azurecli #!/bin/bash
for repo in $REPO_LIST; do
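  # (Sketch of the elided loop body, not the original script: assumes
  # SOURCE_REGISTRY and TARGET_REGISTRY are set, and REPO_LIST was populated
  # earlier, for example with:
  # REPO_LIST=$(az acr repository list --name $SOURCE_REGISTRY --output tsv))
  TAG_LIST=$(az acr repository show-tags --name $SOURCE_REGISTRY --repository $repo --output tsv)
  for tag in $TAG_LIST; do
    # Import each tagged artifact into the target registry
    az acr import \
      --name $TARGET_REGISTRY \
      --source $SOURCE_REGISTRY.azurecr.io/$repo:$tag \
      --force
  done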
done ``` ## Verify target registry Confirm the following information in your target registry:
data-factory Ci Cd Github Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/ci-cd-github-troubleshoot-guide.md
You set up a TEST pipeline stage where you deploy your developed pipeline. You c
For more help with troubleshooting, try the following resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Stack overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory) * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Connector Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-troubleshoot-guide.md
Azure Cosmos DB calculates RUs, see [Request units in Azure Cosmos DB](../cosmos
For more troubleshooting help, try these resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A page](/answers/topics/azure-data-factory.html) * [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
data-factory Data Factory Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-troubleshoot-guide.md
When you observe that the activity is running much longer than your normal runs
For more troubleshooting help, try these resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory) * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory) * [Azure videos](https://azure.microsoft.com/resources/videos/index/)
data-factory Data Factory Ux Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-factory-ux-troubleshoot-guide.md
Solution is to fix JSON files at first and then reopen the pipeline using Author
For more troubleshooting help, try these resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory) * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory) * [Azure videos](https://azure.microsoft.com/resources/videos/index/)
data-factory Data Flow Troubleshoot Connector Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-connector-format.md
For more help with troubleshooting, see these resources:
* [Troubleshoot mapping data flows in Azure Data Factory](data-flow-troubleshoot-guide.md) * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory) * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-troubleshoot-guide.md
Specific scenarios that can cause internal server errors are shown as follows.
For more help with troubleshooting, see these resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Stack Overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory) * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Pipeline Trigger Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/pipeline-trigger-troubleshoot-guide.md
Known Facts about *ForEach*
* **Concurrency Limit:** If your pipeline has a concurrency policy, verify that there are no old pipeline runs in progress. * **Monitoring limits**: Go to the ADF authoring canvas, select your pipeline, and determine if it has a concurrency property assigned to it. If it does, go to the Monitoring view, and make sure there's nothing in the past 45 days that's in progress. If there is something in progress, you can cancel it and the new pipeline run should start.
-* **Transient Issues:** It is possible that your run was impacted by a transient network issue, credential failures, services outages etc. If this happens, Azure Data Factory has an internal recovery process that monitors all the runs and starts them when it notices something went wrong. You can rerun pipelines and activities as described [here.](https://docs.microsoft.com/azure/data-factory/monitor-visually#rerun-pipelines-and-activities). You can rerun activities if you had canceled activity or had a failure as per [Rerun from activity failures.](https://docs.microsoft.com/azure/data-factory/monitor-visually#rerun-from-failed-activity) This process happens every one hour, so if your run is stuck for more than an hour, create a support case.
+* **Transient Issues:** It is possible that your run was impacted by a transient network issue, credential failures, service outages, etc. If this happens, Azure Data Factory has an internal recovery process that monitors all the runs and starts them when it notices something went wrong. You can rerun pipelines and activities as described [here](monitor-visually.md#rerun-pipelines-and-activities). You can rerun activities if you canceled an activity or had a failure, as described in [Rerun from activity failures](monitor-visually.md#rerun-from-failed-activity). This process happens every hour, so if your run is stuck for more than an hour, create a support case.
You have not optimized mapping data flow.
For more troubleshooting help, try these resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A question page](/answers/topics/azure-data-factory.html) * [Twitter information about Data Factory](https://twitter.com/hashtag/DataFactory)
data-factory Security And Access Control Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/security-and-access-control-troubleshoot-guide.md
For more help with troubleshooting, try the following resources:
* [Private Link for Data Factory](data-factory-private-link.md) * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A page](/answers/topics/azure-data-factory.html) * [Stack overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
data-factory Self Hosted Integration Runtime Proxy Ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-proxy-ssis.md
Finally, you download and install the latest version of self-hosted IR, as well
If you use OLEDB/ODBC/ADO.NET drivers for other database systems, such as PostgreSQL, MySQL, Oracle, and so on, you can download the 64-bit versions from their websites. - If you use data flow components from Azure Feature Pack in your packages, [download and install Azure Feature Pack for SQL Server 2017](https://www.microsoft.com/download/details.aspx?id=54798) on the same machine where your self-hosted IR is installed, if you haven't done so already.-- If you haven't done so already, [download and install the 64-bit version of Visual C++ (VC) runtime](https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0) on the same machine where your self-hosted IR is installed.
+- If you haven't done so already, [download and install the 64-bit version of Visual C++ (VC) runtime](https://www.microsoft.com/en-us/download/details.aspx?id=40784) on the same machine where your self-hosted IR is installed.
### Enable Windows authentication for on-premises tasks
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
If it isn't in the trusted root CA, [download it here](http://cacerts.digicert.c
For more help with troubleshooting, try the following resources: * [Data Factory blog](https://azure.microsoft.com/blog/tag/azure-data-factory/)
-* [Data Factory feature requests](https://feedback.azure.com/forums/270578-data-factory)
+* [Data Factory feature requests](/answers/topics/azure-data-factory.html)
* [Azure videos](https://azure.microsoft.com/resources/videos/index/?sort=newest&services=data-factory) * [Microsoft Q&A page](/answers/topics/azure-data-factory.html) * [Stack overflow forum for Data Factory](https://stackoverflow.com/questions/tagged/azure-data-factory)
data-factory Solution Templates Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/solution-templates-introduction.md
Previously updated : 06/04/2021 Last updated : 08/24/2021 # Templates
You can get started creating a Data Factory pipeline from a template in the foll
![Open the template gallery from the Overview page](media/doc-common-process/home-page-pipeline-templates-tile.png)
-1. On the Author tab in Resource Explorer, select **+**, then **Pipeline from template** to open the template gallery.
+1. On the Author tab in Resource Explorer, select **+**, then select **Pipeline from template** to open the template gallery.
- ![Open the template gallery from the Author tab](media/solution-templates-introduction/templates-intro-image2.png)
+ ![Open the template gallery from the Author tab](media/solution-templates-introduction/templates-introduction-image-2.png)
## Template Gallery
-![The template gallery](media/solution-templates-introduction/templates-intro-image3.png)
+![The template gallery](media/solution-templates-introduction/templates-introduction-image-3.png)
### Out of the box Data Factory templates
Data Factory uses Azure Resource Manager templates for saving data factory pipel
You can also save a pipeline as a template by selecting **Save as template** on the Pipeline tab.
-![Save a pipeline as a template](media/solution-templates-introduction/templates-intro-image4.png)
+![Save a pipeline as a template](media/solution-templates-introduction/templates-introduction-image-4.png)
-You can view pipelines saved as templates in the **My Templates** section of the Template Gallery. You can also see them in the **Templates** section in the Resource Explorer.
+After checking the **My templates** box on the **Template gallery** page, you can view pipelines saved as templates in the right pane of the page.
-![My templates](media/solution-templates-introduction/templates-intro-image5.png)
+![My templates](media/solution-templates-introduction/templates-introduction-image-5.png)
> [!NOTE] > To use the My Templates feature, you have to enable GIT integration. Both Azure DevOps GIT and GitHub are supported.
databox Data Box Logs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-logs.md
Previously updated : 05/10/2021 Last updated : 08/24/2021
The following table gives a summary of each step in processing an import order a
This article describes in detail the various mechanisms or tools available to track and audit a Data Box or Data Box Heavy import order. The information in this article applies to both Data Box and Data Box Heavy import orders. In the subsequent sections, any references to Data Box also apply to Data Box Heavy.
+> [!NOTE]
+> [!INCLUDE [data-box-copy-logs-behind-firewall](../../includes/data-box-copy-logs-behind-firewall.md)]
+ ## Set up access control on the order You can control who can access your order when the order is first created. Set up Azure roles at various scopes to control the access to the Data Box order. An Azure role determines the type of access – read-write, read-only, read-write to a subset of operations.
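As an illustration, a role could be assigned at the scope of the order with the Azure CLI. This is a hedged sketch: the `Microsoft.DataBox/jobs` provider path, the role name, and the identity are assumptions for the example, not taken from the article:

```azurecli
az role assignment create --role "Reader" \
    --assignee user@contoso.com \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataBox/jobs/<order-name>"
```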
During the data copy to Data Box or Data Box Heavy, an error file is generated i
Make sure that the copy jobs have finished with no errors. If there are errors during the copy process, download the logs from the **Connect and copy** page. - If you copied a file that is not 512 bytes aligned to a managed disk folder on your Data Box, the file isn't uploaded as a page blob to your staging storage account. You will see an error in the logs. Remove the file, and copy a file that is 512 bytes aligned.-- If you copied a VHDX, or a dynamic VHD, or a differencing VHD (these file types are not supported), you will see an error in the logs.
+- If you copied a VHDX, a dynamic VHD, or a differencing VHD, you will see an error in the logs. Those file types are not supported.
Here is a sample of the *error.xml* for different errors when copying to managed disks.
The copy log path is also displayed on the **Overview** blade for the portal.
![Path to copy log in Overview blade when completed](media/data-box-logs/copy-log-path-1.png)
+> [!NOTE]
+> [!INCLUDE [data-box-copy-logs-behind-firewall](../../includes/data-box-copy-logs-behind-firewall.md)]
+ ### Upload completed successfully The following sample describes the general format of a copy log for a Data Box upload that completed successfully:
databox Data Box Troubleshoot Data Upload https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-troubleshoot-data-upload.md
Previously updated : 05/10/2021 Last updated : 08/24/2021
ddos-protection Ddos Protection Reference Architectures https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/ddos-protection-reference-architectures.md
documentation.
> [!NOTE]
-> Azure App Service Environment for PowerApps or API management in a virtual network with a public IP are both not natively supported.
+> Azure App Service Environment for Power Apps or API management in a virtual network with a public IP are both not natively supported.
## Next steps
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-routes.md
Here are the supported route filters.
| Spec version | The version of the event schema you are using | `specversion = '<version>'` | The version must be `1.0`. This indicates the CloudEvents schema version 1.0 | | Notification body | Reference any property in the `data` field of a notification | `$body.<property>` | See [Event notifications](concepts-event-notifications.md) for examples of notifications. Any property in the `data` field can be referenced using `$body`
+>[!NOTE]
+> Azure Digital Twins currently doesn't support filtering events based on fields within an array. This includes filtering on properties within a `patch` section of a [digital twin change notification](concepts-event-notifications.md#digital-twin-change-notifications).
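For example, a route with a filter on the event `type` could be created with the Azure CLI. This is a sketch assuming the `azure-iot` CLI extension's `az dt route create` command; the instance, endpoint, and route names are placeholders:

```azurecli
az dt route create --dt-name <your-instance-name> \
    --endpoint-name <your-endpoint-name> \
    --route-name CreateEventsRoute \
    --filter "type = 'Microsoft.DigitalTwins.Twin.Create'"
```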
+ The following data types are supported as values returned by references to the data above: | Data type | Example |
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-twin.md
Here is an example of JSON Patch code. This document replaces the *mass* and *ra
:::code language="json" source="~/digital-twins-docs-samples/models/patch.json":::
-Update calls for twins and relationships use [JSON Patch](http://jsonpatch.com/) structure. You can create patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true). Here is an example.
+>[!NOTE]
+> This example shows the JSON Patch `replace` operation, which replaces the value of an existing property. For a full list of JSON Patch operations that can be used, including `add` and `remove`, see the [Operations for JSON Patch](http://jsonpatch.com/#operations).
+
+When updating a twin from a code project using the .NET SDK, you can create JSON patches using the Azure .NET SDK's [JsonPatchDocument](/dotnet/api/azure.jsonpatchdocument?view=azure-dotnet&preserve-view=true). Here is an example.
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/twin_operations_other.cs" id="UpdateTwin":::
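Outside of a code project, the same kind of patch can also be applied from the command line. This is a hedged sketch using the `azure-iot` CLI extension's `az dt twin update` command; the instance and twin IDs are placeholders and the property values are illustrative:

```azurecli
az dt twin update --dt-name <your-instance-name> --twin-id <your-twin-id> \
    --json-patch '[{"op":"replace", "path":"/mass", "value": 0.0799}, {"op":"replace", "path":"/radius", "value": 0.8}]'
```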
event-grid Delivery Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/delivery-properties.md
Set a value to an Authorization header to identify the request with your Webhook
Outgoing requests should now contain the header set on the event subscription: ```console
-GET /home.html HTTP/1.1
-
+POST /home.html HTTP/1.1
Host: acme.com
-User-Agent: <user-agent goes here>
- Authorization: BEARER SlAV32hkKG... ```
Authorization: BEARER SlAV32hkKG...
> Defining authorization headers is a sensible option when your destination is a Webhook. It should not be used for [functions subscribed with a resource id](/rest/api/eventgrid/version2020-06-01/eventsubscriptions/createorupdate#azurefunctioneventsubscriptiondestination), Service Bus, Event Hubs, and Hybrid Connections as those destinations support their own authentication schemes when used with Event Grid. ### Service Bus example
-Azure Service Bus supports the use of a [BrokerProperties HTTP header](/rest/api/servicebus/message-headers-and-properties#message-headers) to define message properties when sending single messages. The value of the `BrokerProperties` header should be provided in the JSON format. For example, if you need to set message properties when sending a single message to Service Bus, set the header in the following way:
-
-| Header name | Header type | Header value |
-| :-- | :-- | :-- |
-|`BrokerProperties` | Static | `BrokerProperties: { "MessageId": "{701332E1-B37B-4D29-AA0A-E367906C206E}", "TimeToLive" : 90}` |
+Azure Service Bus supports the use of the following message properties when sending single messages.
+
+| Header name | Header type |
+| :-- | :-- |
+| `MessageId` | Dynamic |
+| `PartitionKey` | Static or dynamic |
+| `SessionId` | Static or dynamic |
+| `CorrelationId` | Static or dynamic |
+| `Label` | Static or dynamic |
+| `ReplyTo` | Static or dynamic |
+| `ReplyToSessionId` | Static or dynamic |
+| `To` | Static or dynamic |
+| `ViaPartitionKey` | Static or dynamic |
+> [!NOTE]
+> - The default value of `MessageId` is the internal ID of the Event Grid event. You can override it, for example, with a dynamic value such as `data.field`.
+> - You can only set either `SessionId` or `MessageId`.
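As an illustration, these properties can be set as delivery attributes when creating the event subscription with the Azure CLI. This is a sketch, assuming the `--delivery-attribute-mapping` parameter (attribute name, static or dynamic type, then the value or source field); the resource IDs are placeholders:

```azurecli
az eventgrid event-subscription create --name <your-subscription-name> \
    --source-resource-id <your-topic-resource-id> \
    --endpoint-type servicebusqueue \
    --endpoint <your-service-bus-queue-resource-id> \
    --delivery-attribute-mapping Label static MyLabel \
    --delivery-attribute-mapping MessageId dynamic data.field
```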
### Event Hubs example
-If you need to publish events to a specific partition within an event hub, define a [BrokerProperties HTTP header](/rest/api/eventhub/event-hubs-runtime-rest#common-headers) on your event subscription to specify the partition key that identifies the target event hub partition.
+If you need to publish events to a specific partition within an event hub, set the `PartitionKey` property on your event subscription to specify the partition key that identifies the target event hub partition.
-| Header name | Header type | Header value |
-| :-- | :-- | :-- |
-|`BrokerProperties` | Static | `BrokerProperties: {"PartitionKey": "0000000000-0000-0000-0000-000000000000000"}` |
+| Header name | Header type |
+| :-- | :-- |
+|`PartitionKey` | Static |
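A similar hedged sketch for an event hub destination, setting a static `PartitionKey` (again assuming the `--delivery-attribute-mapping` parameter; resource IDs are placeholders):

```azurecli
az eventgrid event-subscription create --name <your-subscription-name> \
    --source-resource-id <your-topic-resource-id> \
    --endpoint-type eventhub \
    --endpoint <your-event-hub-resource-id> \
    --delivery-attribute-mapping PartitionKey static 0
```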
### Configure time to live on outgoing events to Azure Storage Queues
event-grid Geo Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/geo-disaster-recovery.md
Title: Geo disaster recovery in Azure Event Grid | Microsoft Docs description: Describes how Azure Event Grid supports geo disaster recovery (GeoDR) automatically. Previously updated : 11/19/2020 Last updated : 08/24/2021 # Server-side geo disaster recovery in Azure Event Grid
-Event Grid now has an automatic geo disaster recovery (GeoDR) of meta-data not only for new, but all existing domains, topics, and event subscriptions. If an entire Azure region goes down, Event Grid will already have all of your event-related infrastructure metadata synced to a paired region. Your new events will begin to flow again with no intervention by you.
+Event Grid supports automatic geo-disaster recovery of metadata for topics, domains, and event subscriptions. Event Grid automatically syncs your event-related infrastructure to a paired region. If an entire Azure region goes down, the events will begin to flow to the geo-paired region with no intervention from you.
+
+Note that event data is not replicated to the paired region; only the metadata is replicated. If a region supports availability zones, however, the event data is replicated across availability zones.
Disaster recovery is measured with two metrics: - Recovery Point Objective (RPO): the minutes or hours of data that may be lost. - Recovery Time Objective (RTO): the minutes or hours the service may be down.
-Event Grid's automatic failover has different RPOs and RTOs for your metadata (event subscriptions etc.) and data (events). If you need different specification from the following ones, you can still implement your own [client-side fail over using the topic health apis](custom-disaster-recovery.md).
+Event Grid's automatic failover has different RPOs and RTOs for your metadata (topics, domains, and event subscriptions) and data (events). If you need a different specification from the following ones, you can still implement your own [client-side fail over using the topic health apis](custom-disaster-recovery.md).
## Recovery point objective (RPO) - **Metadata RPO**: zero minutes. Anytime a resource is created in Event Grid, it's instantly replicated across regions. When a failover occurs, no metadata is lost.
Event Grid's automatic failover has different RPOs and RTOs for your metadata
- **Metadata RTO**: Though generally it happens much more quickly, within 60 minutes, Event Grid will begin to accept create/update/delete calls for topics and subscriptions. - **Data RTO**: Like metadata, it generally happens much more quickly, however within 60 minutes, Event Grid will begin accepting new traffic after a regional failover.
-> [!NOTE]
-> The cost for metadata GeoDR on Event Grid is: $0.
+> [!IMPORTANT]
+> - There is no service level agreement (SLA) for server-side disaster recovery. If the paired region has no extra capacity to take on the additional traffic, Event Grid cannot initiate failover. Service level objectives are best-effort only.
+> - The cost for metadata GeoDR on Event Grid is: $0.
## Next steps
expressroute Expressroute Howto Coexist Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-coexist-resource-manager.md
The steps to configure both scenarios are covered in this article. This article
## Limits and limitations * **Only route-based VPN gateway is supported.** You must use a route-based [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md). You also can use a route-based VPN gateway with a VPN connection configured for 'policy-based traffic selectors' as described in [Connect to multiple policy-based VPN devices](../vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md).
+* **ExpressRoute-VPN Gateway coexistence configurations are not supported on the Basic SKU**.
* **The ASN of Azure VPN Gateway must be set to 65515.** Azure VPN Gateway supports the BGP routing protocol. For ExpressRoute and Azure VPN to work together, you must keep the Autonomous System Number of your Azure VPN gateway at its default value, 65515. If you previously selected an ASN other than 65515 and you change the setting to 65515, you must reset the VPN gateway for the setting to take effect. * **The gateway subnet must be /27 or a shorter prefix**, (such as /26, /25), or you will receive an error message when you add the ExpressRoute virtual network gateway. * **Coexistence in a dual-stack vnet is not supported.** If you are using ExpressRoute IPv6 support and a dual-stack ExpressRoute gateway, coexistence with VPN Gateway will not be possible.
expressroute Expressroute Howto Linkvnet Arm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-linkvnet-arm.md
Register-AzProviderFeature -FeatureName ExpressRouteVnetPeeringGatewayBypass -Pr
``` > [!NOTE]
-> If you already have FathPath configured and want to enroll in the preview feature, you need to do the following:
-> 1. Delete the connection that has FastPath enabled.
-> 1. Enroll in the FathPath preview feature with the Azure PowerShell command above.
-> 1. Recreate the connection with FathPath enabled.
->
+> If you already have FastPath configured and want to enroll in the preview feature, you need to do the following:
+> 1. Enroll in the FastPath preview feature with the Azure PowerShell command above.
+> 1. Disable and then re-enable FastPath on the target connection.
## Clean up resources
expressroute Expressroute Howto Linkvnet Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/expressroute-howto-linkvnet-portal-resource-manager.md
When adding a new connection for your ExpressRoute gateway, select the checkbox
FastPath support for virtual network peering is now in Public preview. Enrollment is only available through Azure PowerShell. See [FastPath preview features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview), for instructions on how to enroll. > [!NOTE]
-> If you already have FathPath configured and want to enroll in the preview feature, you need to do the following:
-> 1. Delete the connection that has FastPath enabled.
-> 1. Enroll in the FathPath preview feature with the Azure PowerShell command above.
-> 1. Recreate the connection with FathPath enabled.
->
+> If you already have FastPath configured and want to enroll in the preview feature, you need to do the following:
+> 1. Enroll in the FastPath preview feature with the Azure PowerShell command above.
+> 1. Disable and then re-enable FastPath on the target connection.
## Clean up resources
expressroute Howto Linkvnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/expressroute/howto-linkvnet-cli.md
az network vpn-connection update --name ERConnection --resource-group ExpressRou
FastPath support for virtual network peering is now in Public preview. Enrollment is only available through Azure PowerShell. See [FastPath preview features](expressroute-howto-linkvnet-arm.md#enroll-in-expressroute-fastpath-features-preview), for instructions on how to enroll. > [!NOTE]
-> If you already have FathPath configured and want to enroll in the preview feature, you need to do the following:
-> 1. Delete the connection that has FastPath enabled.
-> 1. Enroll in the FathPath preview feature with the Azure PowerShell command above.
-> 1. Recreate the connection with FathPath enabled.
->
+> If you already have FastPath configured and want to enroll in the preview feature, you need to do the following:
+> 1. Enroll in the FastPath preview feature with the Azure PowerShell command above.
+> 1. Disable and then re-enable FastPath on the target connection.
## Clean up resources
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/firewall/overview.md
Azure Firewall has the following known issues:
|Availability zones can only be configured during deployment.|Availability zones can only be configured during deployment. You can't configure Availability Zones after a firewall has been deployed.|This is by design.| |SNAT on inbound connections|In addition to DNAT, connections via the firewall public IP address (inbound) are SNATed to one of the firewall private IPs. This requirement exists today (also for Active/Active NVAs) to ensure symmetric routing.|To preserve the original source for HTTP/S, consider using [XFF](https://en.wikipedia.org/wiki/X-Forwarded-For) headers. For example, use a service such as [Azure Front Door](../frontdoor/front-door-http-headers-protocol.md#front-door-to-backend) or [Azure Application Gateway](../application-gateway/rewrite-http-headers-url.md) in front of the firewall. You can also add WAF as part of Azure Front Door and chain to the firewall. |SQL FQDN filtering support only in proxy mode (port 1433)|For Azure SQL Database, Azure Synapse Analytics, and Azure SQL Managed Instance:<br><br>SQL FQDN filtering is supported in proxy-mode only (port 1433).<br><br>For Azure SQL IaaS:<br><br>If you're using non-standard ports, you can specify those ports in the application rules.|For SQL in redirect mode (the default if connecting from within Azure), you can instead filter access using the SQL service tag as part of Azure Firewall network rules.
-|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 are blocked by Azure Firewall. This is the default platform behavior in Azure. |Use authenticated SMTP relay services, which typically connect through TCP port 587, but also supports other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
+|Outbound SMTP traffic on TCP port 25 is blocked|Outbound email messages that are sent directly to external domains (like `outlook.com` and `gmail.com`) on TCP port 25 are blocked by Azure Firewall. This is the default platform behavior in Azure. |Use authenticated SMTP relay services, which typically connect through TCP port 587 but also support other ports. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md). Currently, Azure Firewall may be able to communicate to public IPs by using outbound TCP 25, but it's not guaranteed to work, and it's not supported for all subscription types. For private IPs like virtual networks, VPNs, and Azure ExpressRoute, Azure Firewall supports an outbound connection of TCP port 25.
|SNAT port exhaustion|Azure Firewall currently supports 1024 ports per Public IP address per backend virtual machine scale set instance. By default, there are two virtual machine scale set instances.|This is an SLB limitation and we are constantly looking for opportunities to increase the limits. In the meantime, it is recommended to configure Azure Firewall deployments with a minimum of five public IP addresses for deployments susceptible to SNAT exhaustion. This increases the SNAT ports available by five times. Allocate from an IP address prefix to simplify downstream permissions.| |DNAT isn't supported with Forced Tunneling enabled|Firewalls deployed with Forced Tunneling enabled can't support inbound access from the Internet because of asymmetric routing.|This is by design because of asymmetric routing. The return path for inbound connections goes via the on-premises firewall, which hasn't seen the connection established. |Outbound Passive FTP may not work for Firewalls with multiple public IP addresses, depending on your FTP server configuration.|Passive FTP establishes different connections for control and data channels. When a Firewall with multiple public IP addresses sends data outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.|An explicit SNAT configuration is planned. In the meantime, you can configure your FTP server to accept data and control channels from different source IP addresses (see [an example for IIS](/iis/configuration/system.applicationhost/sites/sitedefaults/ftpserver/security/datachannelsecurity)). Alternatively, consider using a single IP address in this situation.|
fxt-edge-filer Configure Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/fxt-edge-filer/configure-network.md
For optimal performance, configure your DNS server to handle client-facing clust
A cluster vserver is shown on the left, and IP addresses appear in the center and on the right. Configure each client access point with A records and pointers as illustrated.
-![Cluster round-robin DNS diagram - detailed alt text link follows image](media/fxt-cluster-config/fxt-rrdns-diagram.png)
-[detailed text description](https://azure.github.io/Avere/legacy/Azure-FXT-EdgeFilerDNSconfiguration-alt-text.html)
 <The diagram shows connections among three categories of elements: a single vserver (at the left), three IP addresses (middle column), and three client interfaces (right column). A single circle at the left labeled "vserver1" is connected by arrows pointing toward three circles labeled with IP addresses: 10.0.0.10, 10.0.0.11, and 10.0.0.12. The arrows from the vserver circle to the three IP circles have the caption "A". Each of the IP address circles is connected by two arrows to a circle labeled as a client interface - the circle with IP 10.0.0.10 is connected to "vs1-client-IP-10", the circle with IP 10.0.0.11 is connected to "vs1-client-IP-11", and the circle with IP 10.0.0.12 is connected to "vs1-client-IP-12". The connections between the IP address circles and the client interface circles are two arrows: one arrow labeled "PTR" that points from the IP address circle to the client interface circle, and one arrow labeled "A" that points from the client interface circle to the IP address circle.>
Each client-facing IP address must have a unique name for internal use by the cluster. (In this diagram, the client IPs are named vs1-client-IP-* for clarity, but in production you should probably use something more concise, like client*.)
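To make the diagram concrete, here is a hypothetical BIND-style zone fragment with the same records. The zone and domain names are placeholders, and the third interface name is assumed to follow the vs1-client-IP-* pattern:

```text
; forward zone: round-robin A records for the vserver name,
; plus one A record per client-facing interface name
vserver1          IN  A   10.0.0.10
vserver1          IN  A   10.0.0.11
vserver1          IN  A   10.0.0.12
vs1-client-IP-10  IN  A   10.0.0.10
vs1-client-IP-11  IN  A   10.0.0.11
vs1-client-IP-12  IN  A   10.0.0.12

; reverse zone 0.0.10.in-addr.arpa: one PTR record per client-facing IP
10  IN  PTR  vs1-client-IP-10.example.com.
11  IN  PTR  vs1-client-IP-11.example.com.
12  IN  PTR  vs1-client-IP-12.example.com.
```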
governance Guest Configuration Custom https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/governance/policy/concepts/guest-configuration-custom.md
the state of the machine.
1. Last, the provider runs `Get` to return the current state of each setting so details are available both about why a machine isn't compliant and to confirm that the current state is compliant.
+
+## Trigger Set from outside machine
+
+A challenge in previous versions of DSC has been correcting drift at scale
+without a lot of custom code and reliance on WinRM remote connections. Guest
+configuration solves this problem. Users of guest configuration have control
+over drift correction through
+[Remediation On Demand](./guest-configuration-policy-effects.md#remediation-on-demand-applyandmonitor).
## Special requirements for Get
progressing.
available in the public preview release of guest configuration, including, the `$global:DSCMachineStatus` isn't available. Configurations aren't able to reboot a node during or at the end of a configuration.
+## Known compatibility issues with supported modules
+
+The `PSDscResources` module in the PowerShell Gallery and the `PSDesiredStateConfiguration`
+module that ships with Windows are supported by Microsoft and have been a commonly used
+set of resources for DSC. Until the `PSDscResources` module is updated for DSCv3, be aware of the
+following known compatibility issues.
+
+- Do not use resources from the `PSDesiredStateConfiguration` module that ships with Windows. Instead,
+ switch to `PSDscResources`.
+- Do not use the `WindowsFeature` and `WindowsFeatureSet` resources in `PSDscResources`. Instead,
+ switch to the `WindowsOptionalFeature` and `WindowsOptionalFeatureSet` resources.
## Coexistence with DSC version 3 and previous versions DSC version 3 in guest configuration can coexist with older versions installed in
import-export Storage Import Export View Drive Status https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/import-export/storage-import-export-view-drive-status.md
Previously updated : 03/04/2021 Last updated : 08/24/2021
You see one of the following job statuses depending on where your drive is in th
|: |: | | Creating | After a job is created, its state is set to **Creating**. While the job is in the **Creating** state, the Import/Export service assumes the drives haven't been shipped to the data center. A job may remain in this state for up to two weeks, after which it's automatically deleted by the service. | | Shipping | After you ship your package, you should update the tracking information in the Azure portal. Doing so turns the job into **Shipping** state. The job remains in the **Shipping** state for up to two weeks.
-| Received | After all drives are received at the data center, the job state is set to **Received**. |
+| Received | After all drives are received at the data center, the job state is set to **Received**.<br>The job status may change 1 to 3 business days after the carrier delivers the device, when order processing completes in the datacenter. |
| Transferring | Once at least one drive has begun processing, the job state is set to **Transferring**. For more information, go to [Drive States](#view-drive-status). | | Packaging | After all drives have completed processing, the job is placed in **Packaging** state until the drives are shipped back to you. | | Completed | After all drives are shipped back to you, if the job has completed without errors, then the job is set to **Completed**. The job is automatically deleted after 90 days in the **Completed** state. |
iot-central Howto Export Data Legacy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-export-data-legacy.md
This example snapshot shows a message that contains device and properties data i
If you have an existing data export in your preview application with the *Devices* and *Device templates* streams turned on, update your export by **30 June 2020**. This requirement applies to exports to Azure Blob storage, Azure Event Hubs, and Azure Service Bus.
-Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/2021-04-30preview/devices/get), [device property](/rest/api/iotcentral/2021-04-30preview/devices/getproperties), [device cloud property](/rest/api/iotcentral/2021-04-30preview/devices/getcloudproperties), and [device template](/rest/api/iotcentral/2021-04-30preview/devicetemplates/get) objects in the IoT Central public API.
+Starting 3 February 2020, all new exports in applications with Devices and Device templates enabled will have the data format described above. All exports created before this date remain on the old data format until 30 June 2020, at which time these exports will automatically be migrated to the new data format. The new data format matches the [device](/rest/api/iotcentral/1.0/devices/get), [device property](/rest/api/iotcentral/1.0/devices/get-properties), and [device template](/rest/api/iotcentral/1.0/device-templates/get) objects in the IoT Central public API.
For **Devices**, notable differences between the old data format and the new data format include: - `@id` for device is removed, `deviceId` is renamed to `id`
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-manage-device-certificates.md
description: Create test certificates, install, and manage them on an Azure IoT
Previously updated : 03/01/2021 Last updated : 08/24/2021
To see an example of these certificates, review the scripts that create demo cer
Install your certificate chain on the IoT Edge device and configure the IoT Edge runtime to reference the new certificates.
-Copy the three certificate and key files onto your IoT Edge device. You can use a service like [Azure Key Vault](../key-vault/index.yml) or a function like [Secure copy protocol](https://www.ssh.com/ssh/scp/) to move the certificate files. If you generated the certificates on the IoT Edge device itself, you can skip this step and use the path to the working directory.
+Copy the three certificate and key files onto your IoT Edge device.
-If you are using IoT Edge for Linux on Windows, you need to use the SSH key located in the Azure IoT Edge `id_rsa` file to authenticate file transfers between the host OS and the Linux virtual machine. You can do an authenticated SCP using the following command:
-
- ```powershell-interactive
- C:\WINDOWS\System32\OpenSSH\scp.exe -i 'C:\Program Files\Azure IoT Edge\id_rsa' <PATH_TO_SOURCE_FILE> iotedge-user@<VM_IP>:<PATH_TO_FILE_DESTINATION>
- ```
-
- >[!NOTE]
- >The Linux virtual machine's IP address can be queried via the `Get-EflowVmAddr` command.
-
-If you used the sample scripts to [Create demo certificates](how-to-create-test-certificates.md), copy the following files onto your IoT-Edge device:
+If you used the sample scripts to [create demo certificates](how-to-create-test-certificates.md), the three certificate and key files are located at the following paths:
* Device CA certificate: `<WRKDIR>\certs\iot-edge-device-MyEdgeDeviceCA-full-chain.cert.pem` * Device CA private key: `<WRKDIR>\private\iot-edge-device-MyEdgeDeviceCA.key.pem` * Root CA: `<WRKDIR>\certs\azure-iot-test-only.root.ca.cert.pem`
+You can use a service like [Azure Key Vault](../key-vault/index.yml) or a function like [Secure copy protocol](https://www.ssh.com/ssh/scp/) to move the certificate files. If you generated the certificates on the IoT Edge device itself, you can skip this step and use the path to the working directory.
+
+If you are using IoT Edge for Linux on Windows, you need to use the SSH key located in the Azure IoT Edge `id_rsa` file to authenticate file transfers between the host OS and the Linux virtual machine. Retrieve the Linux virtual machine's IP address using the `Get-EflowVmAddr` command. Then, you can do an authenticated SCP using the following command:
+
+ ```powershell
+ C:\WINDOWS\System32\OpenSSH\scp.exe -i 'C:\Program Files\Azure IoT Edge\id_rsa' <PATH_TO_SOURCE_FILE> iotedge-user@<VM_IP>:<PATH_TO_FILE_DESTINATION>
+ ```
+
+### Configure IoT Edge with the new certificates
+ <!-- 1.1 --> :::moniker range="iotedge-2018-06"
-1. Open the IoT Edge security daemon config file.
+# [Linux containers](#tab/linux)
- * Linux and IoT Edge for Linux on Windows: `/etc/iotedge/config.yaml`
-
- * Windows using Windows containers: `C:\ProgramData\iotedge\config.yaml`
+1. Open the IoT Edge security daemon config file: `/etc/iotedge/config.yaml`
1. Set the **certificate** properties in config.yaml to the file URI path to the certificate and key files on the IoT Edge device. Remove the `#` character before the certificate properties to uncomment the four lines. Make sure the **certificates:** line has no preceding whitespace and that nested items are indented by two spaces. For example:
- * Linux and IoT Edge for Linux on Windows:
+ ```yaml
+ certificates:
+ device_ca_cert: "file:///<path>/<device CA cert>"
+ device_ca_pk: "file:///<path>/<device CA key>"
+ trusted_ca_certs: "file:///<path>/<root CA cert>"
+ ```
+
+1. Make sure that the user **iotedge** has read permissions for the directory holding the certificates.
+
+1. If you've used any other certificates for IoT Edge on the device before, delete the files in the following two directories before starting or restarting IoT Edge:
+
+ * `/var/lib/iotedge/hsm/certs`
+ * `/var/lib/iotedge/hsm/cert_keys`
+
+1. Restart IoT Edge.
+
+ ```bash
+ sudo iotedge system restart
+ ```
- ```yaml
- certificates:
- device_ca_cert: "file:///<path>/<device CA cert>"
- device_ca_pk: "file:///<path>/<device CA key>"
- trusted_ca_certs: "file:///<path>/<root CA cert>"
- ```
+# [Windows containers](#tab/windows)
- * Windows using Windows containers:
+1. Open the IoT Edge security daemon config file: `C:\ProgramData\iotedge\config.yaml`
- ```yaml
- certificates:
- device_ca_cert: "file:///C:/<path>/<device CA cert>"
- device_ca_pk: "file:///C:/<path>/<device CA key>"
- trusted_ca_certs: "file:///C:/<path>/<root CA cert>"
- ```
+1. Set the **certificate** properties in config.yaml to the file URI path to the certificate and key files on the IoT Edge device. Remove the `#` character before the certificate properties to uncomment the four lines. Make sure the **certificates:** line has no preceding whitespace and that nested items are indented by two spaces. For example:
-1. On Linux devices, make sure that the user **iotedge** has read permissions for the directory holding the certificates.
+ ```yaml
+ certificates:
+ device_ca_cert: "file:///C:/<path>/<device CA cert>"
+ device_ca_pk: "file:///C:/<path>/<device CA key>"
+ trusted_ca_certs: "file:///C:/<path>/<root CA cert>"
+ ```
1. If you've used any other certificates for IoT Edge on the device before, delete the files in the following two directories before starting or restarting IoT Edge:
- * Linux and IoT Edge for Linux on Windows: `/var/lib/iotedge/hsm/certs` and `/var/lib/iotedge/hsm/cert_keys`
+ * `C:\ProgramData\iotedge\hsm\certs`
+ * `C:\ProgramData\iotedge\hsm\cert_keys`
- * Windows using Windows containers: `C:\ProgramData\iotedge\hsm\certs` and `C:\ProgramData\iotedge\hsm\cert_keys`
+1. Restart IoT Edge.
+
+ ```powershell
+ Restart-Service iotedge
+ ```
+ :::moniker-end <!-- end 1.1 -->
If you used the sample scripts to [Create demo certificates](how-to-create-test-
pk = "file:///<path>/<device CA key>" ```
-1. Make sure that the user **iotedge** has read permissions for the directory holding the certificates.
+1. Make sure that the service has read permissions for the directories holding the certificates and keys.
-1. If you've used any other certificates for IoT Edge on the device before, delete the files in the following two directories before starting or restarting IoT Edge:
+ * The private key file should be owned by the **aziotks** group.
+ * The certificate files should be owned by the **aziotcs** group.
+
+ >[!TIP]
+ >If your certificate is read-only, meaning you created it and don't want the IoT Edge service to rotate it, set the private key file to mode 0440 and the certificate file to mode 0444. If you created the initial files and then configured the cert service to rotate the certificate in the future, set the private key file to mode 0660 and the certificate file to mode 0664.
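
    As a minimal sketch, the group ownership and file modes described above could be set like this (the file paths are placeholders, not from the article):

    ```bash
    # Key files are read by the aziotks (keys) service group; certificates by aziotcs (certs).
    # Modes 0440/0444 match the read-only case described in the tip above.
    sudo chgrp aziotks /var/secrets/aziot/device-ca.key.pem
    sudo chmod 0440 /var/secrets/aziot/device-ca.key.pem
    sudo chgrp aziotcs /var/secrets/aziot/device-ca.full-chain.cert.pem
    sudo chmod 0444 /var/secrets/aziot/device-ca.full-chain.cert.pem
    ```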
+
+1. If you've used any other certificates for IoT Edge on the device before, delete the files in the following directory. IoT Edge will recreate them with the new CA certificate you provided.
* `/var/lib/aziot/certd/certs`
- * `/var/lib/aziot/keyd/keys`
+
+1. Apply the configuration changes.
+
+ ```bash
+ sudo iotedge config apply
+ ```
:::moniker-end <!-- end 1.2 -->
-<!-- 1.1. -->
-<!-- Temporarily, customizable certificate lifetime not available in 1.2. Update before GA. -->
- ## Customize certificate lifetime IoT Edge automatically generates certificates on the device in several cases, including:
IoT Edge automatically generates certificates on the device in several cases, in
For more information about the function of the different certificates on an IoT Edge device, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
-For these two automatically generated certificates, you have the option of setting the **auto_generated_ca_lifetime_days** flag in the config file to configure the number of days for the lifetime of the certificates.
+For these two automatically generated certificates, you have the option of setting a flag in the config file to configure the number of days for the lifetime of the certificates.
>[!NOTE]
->There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 90 day lifetime, but is automatically renewed before expiring. The **auto_generated_ca_lifetime_days** value doesn't affect this certificate.
+>There is a third auto-generated certificate that the IoT Edge security manager creates, the **IoT Edge hub server certificate**. This certificate always has a 90-day lifetime, but is automatically renewed before expiring. The auto-generated CA lifetime value set in the config file doesn't affect this certificate.
Upon expiry after the specified number of days, IoT Edge has to be restarted to regenerate the device CA certificate. The device CA certificate won't be renewed automatically.
+<!-- 1.1. -->
+
+# [Linux containers](#tab/linux)
+ 1. To configure the certificate expiration to something other than the default 90 days, add the value in days to the **certificates** section of the config file. ```yaml
Upon expiry after the specified number of days, IoT Edge has to be restarted to
1. Delete the contents of the `hsm` folder to remove any previously generated certificates.
- * Linux and IoT Edge for Linux on Windows: `/var/lib/iotedge/hsm/certs` and `/var/lib/iotedge/hsm/cert_keys`
-
- * Windows using Windows containers: `C:\ProgramData\iotedge\hsm\certs` and `C:\ProgramData\iotedge\hsm\cert_keys`
+ * `/var/lib/iotedge/hsm/certs`
+ * `/var/lib/iotedge/hsm/cert_keys`
1. Restart the IoT Edge service.
- * Linux and IoT Edge for Linux on Windows:
-   ```bash
-   sudo systemctl restart iotedge
-   ```
- * Windows using Windows containers:
+1. Confirm the lifetime setting.
- ```powershell
- Restart-Service iotedge
+ ```bash
+ sudo iotedge check --verbose
```
-1. Confirm the lifetime setting.
+ Check the output of the **production readiness: certificates** check, which lists the number of days until the automatically generated device CA certificates expire.
- * Linux and IoT Edge for Linux on Windows:
+# [Windows containers](#tab/windows)
- ```bash
- sudo iotedge check --verbose
+1. To configure the certificate expiration to something other than the default 90 days, add the value in days to the **certificates** section of the config file.
+
+ ```yaml
+ certificates:
+ device_ca_cert: "<ADD URI TO DEVICE CA CERTIFICATE HERE>"
+ device_ca_pk: "<ADD URI TO DEVICE CA PRIVATE KEY HERE>"
+ trusted_ca_certs: "<ADD URI TO TRUSTED CA CERTIFICATES HERE>"
+ auto_generated_ca_lifetime_days: <value>
```
- * Windows using Windows containers:
+ > [!NOTE]
+ > Currently, a limitation in libiothsm prevents the use of certificates that expire on or after January 1, 2038.
+
+1. Delete the contents of the `hsm` folder to remove any previously generated certificates.
+
+ * `C:\ProgramData\iotedge\hsm\certs`
+ * `C:\ProgramData\iotedge\hsm\cert_keys`
+
+1. Restart the IoT Edge service.
+
+ ```powershell
+ Restart-Service iotedge
+ ```
+
+1. Confirm the lifetime setting.
   ```powershell
   iotedge check --verbose
   ```
Upon expiry after the specified number of days, IoT Edge has to be restarted to
   Check the output of the **production readiness: certificates** check, which lists the number of days until the automatically generated device CA certificates expire.

:::moniker-end <!-- end 1.1 -->
-<!--
-<!-- 1.2 --
+<!-- 1.2 -->
:::moniker range=">=iotedge-2020-11"
-1. To configure the certificate expiration to something other than the default 90 days, add the value in days to the **certificates** section of the config file.
+1. To configure the certificate expiration to something other than the default 90 days, add the value in days to the **Edge CA certificate (Quickstart)** section of the config file.
```toml
- [certificates]
- device_ca_cert = "<ADD URI TO DEVICE CA CERTIFICATE HERE>"
- device_ca_pk = "<ADD URI TO DEVICE CA PRIVATE KEY HERE>"
- trusted_ca_certs = "<ADD URI TO TRUSTED CA CERTIFICATES HERE>"
- auto_generated_ca_lifetime_days = <value>
+ [edge_ca]
+ auto_generated_edge_ca_expiry_days = <value>
   ```

1. Delete the contents of the `certd` and `keyd` folders to remove any previously generated certificates: `/var/lib/aziot/certd/certs` and `/var/lib/aziot/keyd/keys`.
-1. Restart IoT Edge.
+1. Apply the configuration changes.
```bash
- sudo iotedge system restart
+ sudo iotedge config apply
   ```

1. Confirm the new lifetime setting.
Upon expiry after the specified number of days, IoT Edge has to be restarted to
Check the output of the **production readiness: certificates** check, which lists the number of days until the automatically generated device CA certificates expire. :::moniker-end
-<!-- end 1.2 --
>
+<!-- end 1.2 -->
## Next steps
iot-edge How To Visual Studio Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/how-to-visual-studio-develop-module.md
After your Visual Studio 2019 is ready, you also need the following tools and co
* Download and install [Docker Community Edition](https://docs.docker.com/install/) on your development machine to build and run your module images. You'll need to set Docker CE to run in either Linux container mode or Windows container mode, depending on the type of modules you are developing.
-* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (2.7/3.6+) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0.
+* Set up your local development environment to debug, run, and test your IoT Edge solution by installing the [Azure IoT EdgeHub Dev Tool](https://pypi.org/project/iotedgehubdev/). Install [Python (3.5/3.6/3.7/3.8) and Pip](https://www.python.org/) and then install the **iotedgehubdev** package by running the following command in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.3.0.
   ```cmd
   pip install --upgrade iotedgehubdev
   ```
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/import-schema.md
Authorization URL: https://login.microsoftonline.com/common/oauth2/authorize
| Name | Description |
| --- | --- |
-| https://api.adu.microsoft.com/user_impersonation | Impersonate your user account |
-| https://api.adu.microsoft.com/.default | Client credential flows |
+| `https://api.adu.microsoft.com/user_impersonation` | Impersonate your user account |
+| `https://api.adu.microsoft.com/.default` | Client credential flows |
**Permissions**
iot-hub Iot Hub Automatic Device Management Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-automatic-device-management-cli.md
Automatic module configurations require the use of module twins to synchronize s
## Use tags to target twins
-Before you create a configuration, you must specify which devices or modules you want to affect. Azure IoT Hub identifies devices and using tags in the device twin, and identifies modules using tags in the module twin. Each device or modules can have multiple tags, and you can define them any way that makes sense for your solution. For example, if you manage devices in different locations, add the following tags to a device twin:
+Before you create a configuration, you must specify which devices or modules you want to affect. Azure IoT Hub identifies devices using tags in the device twin, and identifies modules using tags in the module twin. Each device or module can have multiple tags, and you can define them any way that makes sense for your solution. For example, if you manage devices in different locations, add the following tags to a device twin:
```json
"tags": {
    "location": {
        "state": "Washington",
        "city": "Tacoma"
    }
}
```
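
Tags like these can then be referenced when you target the configuration. As a rough sketch with the Azure CLI (the configuration ID, hub name, and content file path are illustrative, and the tag values match the example above):

```azurecli-interactive
az iot hub configuration create --config-id chillers-washington \
    --hub-name <yourIoTHubName> \
    --content ./targetContent.txt \
    --target-condition "tags.location.state='Washington'"
```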
Before you create a configuration, you must specify which devices or modules you
## Define the target content and metrics
-The target content and metric queries are specified as JSON documents that describe the device twin or module twin desired properties to set and reported properties to measure. To create an automatic configuration using Azure CLI, save the target content and metrics locally as .txt files. You use the file paths in a later section when you run the command to apply the configuration to your device.
+The target content and metric queries are specified as JSON documents that describe the device twin or module twin desired properties to set and reported properties to measure. To create an automatic configuration using Azure CLI, save the target content and metrics locally as .txt files. You use the file paths in a later section when you run the command to apply the configuration to your device.
Here's a basic target content sample for an automatic device configuration:
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Previously updated : 06/24/2021 Last updated : 08/24/2021
Azure IoT Hub supports using Azure Active Directory (AAD) to authenticate requests to its service APIs like create device identity or invoke direct method. Also, IoT Hub supports authorization of the same service APIs with Azure role-based access control (Azure RBAC). Together, you can grant permissions to access IoT Hub's service APIs to an AAD security principal, which could be a user, group, or application service principal.
-Authenticating access with Azure AD and controlling permissions with Azure RBAC provides superior security and ease of use over [security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, Microsoft recommends using Azure AD with your IoT hub whenever possible.
+Authenticating access with Azure AD and controlling permissions with Azure RBAC provides superior security and ease of use over [security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, Microsoft recommends [using Azure AD with your IoT hub whenever possible](#azure-ad-access-and-shared-access-policies).
> [!NOTE] > Authenticating with Azure AD isn't supported for IoT Hub's *device APIs* (like device-to-cloud messages and update reported properties). Use [symmetric keys](iot-hub-dev-guide-sas.md#use-a-symmetric-key-in-the-identity-registry) or [X.509](iot-hub-x509ca-overview.md) to authenticate devices to IoT hub.
The following tables describe the permissions available for IoT Hub service API
> [!NOTE] > To get data from IoT Hub using Azure AD, [set up routing to a separate Event Hub](iot-hub-devguide-messages-d2c.md#event-hubs-as-a-routing-endpoint). To access the [built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md), use the connection string (shared access key) method as before.
-## Azure AD access from Azure portal
+## Azure AD access and shared access policies
-When you try to access IoT Hub, the Azure portal first checks whether you've been assigned an Azure role with **Microsoft.Devices/iotHubs/listkeys/action**. If so, then Azure portal uses the keys from shared access policies for accessing IoT Hub. If not, Azure portal tries to access data using your Azure AD account.
+By default, IoT Hub supports service API access through both Azure AD and [shared access policies and security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, disable access with shared access policies:
-To access IoT Hub from Azure portal using your Azure AD account, you need permissions to access the IoT hub data resources (like devices and twins), and you also need permissions to navigate to the IoT hub resource in the Azure portal. The built-in roles provided by IoT Hub grant access to resources like devices and twin, but they don't grant access to the IoT Hub resource. So, access to the portal also requires assignment of an Azure Resource Manager (ARM) role like [Reader](../role-based-access-control/built-in-roles.md#reader). The Reader role is a good choice because it's the most restricted role that lets you navigate the portal, and it doesn't include the **Microsoft.Devices/iotHubs/listkeys/action** permission (which gives access to all IoT Hub data resources via shared access policies).
+1. Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-using-azure-rbac-role-assignment) to your IoT hub following [principle of least privilege](../security/fundamentals/identity-management-best-practices.md).
+1. In the [Azure portal](https://portal.azure.com), go to your IoT hub.
+1. On the left, select **Shared access policies**.
+1. Under **Connect using shared access policies**, select **Deny**.
+ :::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot of Azure portal showing how to turn off IoT Hub shared access policies":::
+1. Review the warning, then select **Save**.
+
+Your IoT hub service APIs can now be accessed only by using Azure AD and RBAC.
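
If you prefer to script this setting, the portal toggle corresponds to a resource property. The following is a sketch that assumes the `disableLocalAuth` property on the IoT Hub resource and mirrors the `az resource update` pattern used elsewhere in these articles:

```azurecli-interactive
az resource update -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs --set properties.disableLocalAuth=true
```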
+
+## Azure AD access from the Azure portal
+
+When you try to access IoT Hub, the Azure portal first checks whether you've been assigned an Azure role with **Microsoft.Devices/iotHubs/listkeys/action**. If so, then the Azure portal uses the keys from shared access policies for accessing IoT Hub. If not, the Azure portal tries to access data using your Azure AD account.
+
+To access IoT Hub from the Azure portal using your Azure AD account, you need permissions to access the IoT hub data resources (like devices and twins), and you also need permissions to navigate to the IoT hub resource in the Azure portal. The built-in roles provided by IoT Hub grant access to resources like devices and twin, but they don't grant access to the IoT Hub resource. So, access to the portal also requires assignment of an Azure Resource Manager (ARM) role like [Reader](../role-based-access-control/built-in-roles.md#reader). The Reader role is a good choice because it's the most restricted role that lets you navigate the portal, and it doesn't include the **Microsoft.Devices/iotHubs/listkeys/action** permission (which gives access to all IoT Hub data resources via shared access policies).
To ensure an account doesn't have access outside of assigned permissions, *don't* include the **Microsoft.Devices/iotHubs/listkeys/action** permission when creating a custom role. For example, to create a custom role that could read device identities, but cannot create or delete devices, create a custom role that (see the sketch after this list):

- Has the **Microsoft.Devices/IotHubs/devices/read** data action
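
As a hedged sketch of such a role definition with the Azure CLI (the role name and scope are placeholders, and a real role may need additional actions for your scenario):

```azurecli-interactive
az role definition create --role-definition '{
  "Name": "IoT Hub Device Identity Reader (example)",
  "Description": "Can read device identities but not create or delete devices.",
  "Actions": [],
  "DataActions": [ "Microsoft.Devices/IotHubs/devices/read" ],
  "AssignableScopes": [ "/subscriptions/<subscriptionId>" ]
}'
```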
Most commands against IoT Hub support Azure AD authentication. The type of auth
- When `--auth-type` has the value of `key`, like before, the CLI automatically discovers a suitable policy when interacting with IoT Hub.

-- When `--auth-type` has the value `login`, an access token from the Azure CLI logged in principal is used for the operation.
+- When `--auth-type` has the value `login`, an access token from the Azure CLI's logged-in principal is used for the operation.
To learn more, see the [Azure IoT extension for Azure CLI release page](https://github.com/Azure/azure-iot-cli-extension/releases/tag/v0.10.12)
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-dev-guide-sas.md
Previously updated : 04/21/2021 Last updated : 08/24/2021
The result, which would grant access to read all device identities, would be:
You can use any X.509 certificate to authenticate a device with IoT Hub by uploading either a certificate thumbprint or a certificate authority (CA) to Azure IoT Hub. To learn more, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md). For information about how to upload and verify a certificate authority with your IoT hub, see [Set up X.509 security in your Azure IoT hub](./tutorial-x509-scripts.md).
+### Enforcing X.509 authentication
+
+For additional security, an IoT hub can be configured to disallow SAS authentication for devices and modules, leaving X.509 as the only accepted authentication option. Currently, this feature isn't available in the Azure portal. To configure it, set `disableDeviceSAS` and `disableModuleSAS` to `true` on the IoT Hub resource properties:
+
+```azurecli-interactive
+az resource update -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs --set properties.disableDeviceSAS=true properties.disableModuleSAS=true
+```
+
### Use SAS tokens as a device

There are two ways to obtain **DeviceConnect** permissions with IoT Hub with security tokens: use a [symmetric device key from the identity registry](#use-a-symmetric-key-in-the-identity-registry), or use a [shared access key](#use-a-shared-access-policy-to-access-on-behalf-of-a-device).
iot-hub Iot Hub Devguide Quotas Throttling https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-quotas-throttling.md
IoT Hub enforces other operational limits:
| IoT Edge automatic deployments<sup>1</sup> | 50 modules per deployment. 100 deployments (including layered deployments) per paid SKU hub. 10 deployments per free SKU hub. |
| Twins<sup>1</sup> | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. |
| Shared access policies | Maximum number of shared access policies is 16. |
+| Restrict outbound network access | Maximum number of allowed FQDNs is 20. |
| x509 CA certificates | Maximum number of x509 CA certificates that can be registered on IoT Hub is 25. | <sup>1</sup>This feature is not available in the basic tier of IoT Hub. For more information, see [How to choose the right IoT Hub](iot-hub-scaling.md).
iot-hub Iot Hub Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-managed-identity.md
Previously updated : 05/11/2021 Last updated : 08/24/2021
IoT Hub's [file upload](iot-hub-devguide-file-upload.md) feature allows devices
> You need to complete above steps to assign the managed identity the right access before saving the storage account in IoT Hub for file upload using the managed identity. Please wait a few minutes for the role assignment to propagate. 5. On your IoT hub's resource page, navigate to **File upload** tab.
-6. On the page that shows up, select the container that you intend to use in your blob storage, configure the **File notification settings, SAS TTL, Default TTL, and Maximum delivery count** as desired. Choose the preferred authentication type, and click **Save**.
+6. On the page that shows up, select the container that you intend to use in your blob storage, configure the **File notification settings, SAS TTL, Default TTL, and Maximum delivery count** as desired, choose the preferred authentication type, and then select **Save**. If you get an error at this step, temporarily set your storage account to allow access from **All networks**, and then try again. You can configure the firewall on the storage account after the file upload configuration is complete. (A hedged CLI sketch of the role assignment from the earlier steps appears after this step.)
:::image type="content" source="./media/iot-hub-managed-identity/file-upload.png" alt-text="IoT Hub file upload with msi":::
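
If you need to script the access assignment from the earlier steps, a sketch along these lines may help; it assumes the hub uses a system-assigned identity and uses placeholder resource names:

```azurecli-interactive
# Get the hub's system-assigned identity, then grant it access to the storage account.
principalId=$(az iot hub show -n <iothubName> --query identity.principalId -o tsv)
az role assignment create --assignee $principalId \
    --role "Storage Blob Data Contributor" \
    --scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.Storage/storageAccounts/<storageAccount>"
```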
iot-hub Iot Hub Node Node Module Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-node-node-module-twin-getstarted.md
-
+ Title: Start with Azure IoT Hub module identity & module twin (Node.js) description: Learn how to create module identity and update module twin using IoT SDKs for Node.js. - ms.devlang: nodejs Previously updated : 04/26/2018 Last updated : 08/23/2021
[!INCLUDE [iot-hub-selector-module-twin-getstarted](../../includes/iot-hub-selector-module-twin-getstarted.md)] > [!NOTE]
-> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provides visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system based devices or firmware devices, it allows for isolated configuration and conditions for each component.
+> [Module identities and module twins](iot-hub-devguide-module-twins.md) are similar to Azure IoT Hub device identity and device twin, but provide finer granularity. While Azure IoT Hub device identity and device twin enable the back-end application to configure a device and provides visibility on the device's conditions, a module identity and module twin provide these capabilities for individual components of a device. On capable devices with multiple components, such as operating system-based devices or firmware devices, it allows for isolated configuration and conditions for each component.
At the end of this tutorial, you have two Node.js apps:
-* **CreateIdentities**, which creates a device identity, a module identity and associated security key to connect your device and module clients.
+* **CreateIdentities**, which creates a device identity, a module identity, and associated security keys to connect your device and module clients.
* **UpdateModuleTwinReportedProperties**, which sends updated module twin reported properties to your IoT Hub.
This app creates a device identity with ID **myFirstDevice** and a module identi
Run this using node add.js. It will give you a connection string for your device identity and another one for your module identity. > [!NOTE]
-> The IoT Hub identity registry only stores device and module identities to enable secure access to the IoT hub. The identity registry stores device IDs and keys to use as security credentials. The identity registry also stores an enabled/disabled flag for each device that you can use to disable access for that device. If your application needs to store other device-specific metadata, it should use an application-specific store. There is no enabled/disabled flag for module identities. For more information, see [IoT Hub developer guide](iot-hub-devguide-identity-registry.md).
+> The IoT Hub identity registry only stores device and module identities to enable secure access to the IoT hub. The identity registry stores device IDs and keys to use as security credentials. The identity registry also stores an enabled/disabled flag for each device that you can use to disable access for that device. If your application needs to store other device-specific metadata, it should use an application-specific store. There is no enabled/disabled flag for module identities. For more information, see [Understand the identity registry in your IoT Hub in the IoT Hub developer guide](iot-hub-devguide-identity-registry.md).
## Update the module twin using Node.js device SDK
In this section, you create a Node.js app on your simulated device that updates
![Azure portal module detail](./media/iot-hub-node-node-module-twin-getstarted/module-detail.png)
-2. Similar to you did in the step above, create a directory for your device code and use NPM to initialize it and install the device SDK (**npm install -S azure-iot-device-amqp\@modules-preview**).
+2. Similar to what you did in the step above, create a directory for your device code and use NPM to initialize it and install the device SDK (**npm install -S azure-iot-device-amqp\@modules-preview**).
> [!NOTE]
- > The npm install command may feel slow. Be patient, it's pulling down lots of code from the package repository.
+ > The npm install command may feel slow. Be patient -- it's pulling down lots of code from the package repository.
> [!NOTE] > If you see an error that says npm ERR! registry error parsing json, this is safe to ignore.
To continue getting started with IoT Hub and to explore other IoT scenarios, see
* [Getting started with device management](iot-hub-node-node-device-management-get-started.md)
-* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
+* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Iot Hub Restrict Outbound Network Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-restrict-outbound-network-access.md
+
+ Title: Restrict IoT Hub outbound network access and data loss prevention
+description: Developer guide - how to configure IoT Hub to egress to trusted locations only.
++++++ Last updated : 08/24/2021+++
+# Restrict outbound network access for Azure IoT Hub
+
+IoT Hub supports data egress to other services through [routing to custom endpoints](iot-hub-devguide-messages-d2c.md), [file upload](iot-hub-devguide-file-upload.md), and [device identity export](iot-hub-bulk-identity-mgmt.md). For extra security in an enterprise environment, use the `restrictOutboundNetworkAccess` API to restrict an IoT hub's egress to only explicitly approved destinations. Currently, this feature isn't available in the Azure portal.
+
+## Enabling the restriction
+
+To enable the feature, use any method to update the IoT Hub resource properties (a `PUT`) to set `restrictOutboundNetworkAccess` to `true` while including an `allowedFqdnList` containing fully qualified domain names (FQDNs) as an array.
+
+An example showing the JSON representation to use with the [create or update method](/rest/api/iothub/iothubresource/createorupdate):
+
+```json
+{
+  ...
+  "properties": {
+    ...
+    "restrictOutboundNetworkAccess": true,
+    "allowedFqdnList": [
+      "my-eventhub.servicebus.windows.net",
+      "iothub-ns-built-in-endpoint-2917414-51ea2487eb.servicebus.windows.net"
+    ]
+    ...
+  },
+  "sku": {
+    "name": "S1",
+    "capacity": 1
+  }
+}
+```
+To make the same update by using the Azure CLI, run:
+
+```azurecli-interactive
+az resource update -n <iothubName> -g <resourceGroupName> --resource-type Microsoft.Devices/IotHubs --set properties.restrictOutboundNetworkAccess=true properties.allowedFqdnList="['my-eventhub.servicebus.windows.net','iothub-ns-built-in-endpoint-2917414-51ea2487eb.servicebus.windows.net']"
+```
+
+## Restricting outbound network access with existing routes
+
+Once `restrictOutboundNetworkAccess` is set to `true`, attempts to emit data to destinations outside of the allowed FQDNs fail. Even existing configured routes stop working if the custom endpoint isn't included in the allowed FQDN list.
+
+## Built-in endpoint
+
+If `restrictOutboundNetworkAccess` is set to `true`, the built-in event hub compatible endpoint isn't exempt from the restriction. In other words, you must include the built-in endpoint FQDN in the allowed FQDN list for it to continue to work.
+
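To find the FQDN of your hub's built-in endpoint for the allow list, a query like the following may help (it assumes the `eventHubEndpoints` property path returned by `az iot hub show`; the result is an `sb://` URI whose host name is the FQDN to allow):

```azurecli-interactive
az iot hub show -n <iothubName> --query properties.eventHubEndpoints.events.endpoint -o tsv
```
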
+## Next steps
+
+- To use managed identity for data egress, see [IoT Hub support for managed identities](iot-hub-managed-identity.md).
+- To restrict inbound network access, see [Managing public network access for your IoT hub](iot-hub-public-network-access.md) and [IoT Hub support for virtual networks with Private Link](virtual-network-support.md).
iot-hub Iot Hub Troubleshoot Error 409001 Devicealreadyexists https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-troubleshoot-error-409001-devicealreadyexists.md
Title: Troubleshooting Azure IoT Hub error 409001 DeviceAlreadyExists description: Understand how to fix error 409001 DeviceAlreadyExists - Previously updated : 01/30/2020 Last updated : 07/07/2021 #Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 409001 DeviceAlreadyExists errors.
iot-hub Iot Hub Troubleshoot Error 503003 Partitionnotfound https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-troubleshoot-error-503003-partitionnotfound.md
Title: Troubleshooting Azure IoT Hub error 503003 PartitionNotFound description: Understand how to fix error 503003 PartitionNotFound - Previously updated : 01/30/2020 Last updated : 07/07/2021 #Customer intent: As a developer or operator for Azure IoT Hub, I want to resolve 503003 PartitionNotFound errors.
iot-hub Iot Hub Understand Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-understand-ip-address.md
You may use these IP address prefixes to control connectivity between IoT Hub an
| Ensure your IoT Hub device endpoint receives connections only from your devices and network assets | [Device-to-cloud](./iot-hub-devguide-messaging.md), and [cloud-to-device](./iot-hub-devguide-messages-c2d.md) messaging, [direct methods](./iot-hub-devguide-direct-methods.md), [device and module twins](./iot-hub-devguide-device-twins.md) and [device streams](./iot-hub-device-streams-overview.md) | Use IoT Hub [IP filter feature](iot-hub-ip-filtering.md) to allow connections from your devices and network asset IP addresses (see [limitations](#limitations-and-workarounds) section). |
| Ensure your routes' custom endpoint resources (storage accounts, service bus and event hubs) are reachable from your network assets only | [Message routing](./iot-hub-devguide-messages-d2c.md) | Follow your resource's guidance on restricting connectivity (for example via [firewall rules](../storage/common/storage-network-security.md), [private links](../private-link/private-endpoint-overview.md), or [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md)); use _AzureIoTHub_ service tags to discover IoT Hub IP address prefixes and add ALLOW rules for those IP prefixes on your resource's firewall configuration (see [limitations](#limitations-and-workarounds) section). |

## Best practices
-* When adding ALLOW rules in your devices' firewall configuration, it is best to provide specific [ports used by applicable protocols](./iot-hub-devguide-protocols.md#port-numbers).
+* The IP address of an IoT hub is subject to change without notice. To minimize disruption, use the IoT hub hostname (for example, myhub.azure-devices.net) for networking and firewall configuration whenever possible.
+
+* For constrained IoT systems without domain name resolution (DNS), IoT Hub IP address ranges are published periodically via service tags before changes take effect. It is therefore important that you develop processes to regularly retrieve and use the latest service tags (a CLI sketch appears after this list). This process can be automated via the [service tags discovery API](../virtual-network/service-tags-overview.md#service-tags-on-premises). Note that the service tags discovery API is still in preview and in some cases may not produce the full list of tags and IP addresses. Until the discovery API is generally available, consider using the [service tags in downloadable JSON format](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
-* The IP address prefixes of IoT hub are subject to change. These changes are published periodically via service tags before taking effect. It is therefore important that you develop processes to regularly retrieve and use the latest service tags. This process can be automated via the [service tags discovery API](../virtual-network/service-tags-overview.md#service-tags-on-premises). Note that Service tags discovery API is still in preview and in some cases may not produce the full list of tags and IP addresses. Until discovery API is generally available, consider using the [service tags in downloadable JSON format](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files).
* Use the *AzureIoTHub.[region name]* tag to identify IP prefixes used by IoT hub endpoints in a specific region. To account for datacenter disaster recovery, or [regional failover](iot-hub-ha-dr.md) ensure connectivity to IP prefixes of your IoT Hub's geo-pair region is also enabled. * Setting up firewall rules in IoT Hub may block off connectivity needed to run Azure CLI and PowerShell commands against your IoT Hub. To avoid this, you can add ALLOW rules for your clients' IP address prefixes to re-enable CLI or PowerShell clients to communicate with your IoT Hub.
+* When adding ALLOW rules in your devices' firewall configuration, it is best to provide specific [ports used by applicable protocols](./iot-hub-devguide-protocols.md#port-numbers).
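
Here's a minimal sketch of the retrieval step mentioned above, using the Azure CLI to pull the current IoT Hub prefixes from the service tags API (the region and query expression are illustrative):

```azurecli-interactive
# List the AzureIoTHub service tag prefixes for a region (example: westus2).
az network list-service-tags --location westus2 \
    --query "values[?starts_with(name, 'AzureIoTHub')].{tag:name, prefixes:properties.addressPrefixes}" -o json
```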
## Limitations and workarounds
iot-hub Tutorial Routing Config Message Routing CLI https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-routing-config-message-routing-CLI.md
Previously updated : 04/04/2021 Last updated : 8/20/2021 #Customer intent: As a developer, I want to be able to route messages sent to my IoT hub to different destinations based on properties stored in the message. I want to be able to set up the resource and the routing using the Azure CLI.
iot-hub Tutorial Routing Config Message Routing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/tutorial-routing-config-message-routing-PowerShell.md
New-AzIotHub -ResourceGroupName $resourceGroup `
-Location $location ` -Units 1
-# Add a consumer group to the IoT hub for the 'events' endpoint.
+# Add a consumer group to the IoT hub.
Add-AzIotHubEventHubConsumerGroup -ResourceGroupName $resourceGroup ` -Name $iotHubName `
- -EventHubConsumerGroupName $iotHubConsumerGroup `
- -EventHubEndpointName "events"
+ -EventHubConsumerGroupName $iotHubConsumerGroup
# The storage account name must be globally unique, so add a random value to the end. $storageAccountName = "contosostorage" + $randomValue
$endpointType = "servicebusqueue"
$routeName = "ContosoSBQueueRoute" $condition = 'level="critical"'
-# Add the routing endpoint, using the connection string property from the key.
+# If this script fails on the next statement (Add-AzIotHubRoutingEndpoint),
+# uncomment the Start-Sleep line below and run it again. If you're running
+# the script interactively, you can just stop it and then run the rest,
+# because you have already set the variables before you get to this point.
+# If the failure persists, report it to the IoT team here:
+# https://github.com/Azure/azure-powershell/issues
+#
+# The pause gives the previous steps 90 seconds to complete, so that if they
+# didn't get to finish before the script tried to move on, they now have
+# time to finish.
+## Start-Sleep -Seconds 90
+
+# This command is the one that sometimes doesn't work, as if it doesn't have
+# time to finish before the script moves to the next line. The error from
+# Add-AzIotHubRoutingEndpoint is "Operation returned an invalid status code 'BadRequest'".
+# This command adds the routing endpoint, using the connection string property from the key.
+# It works reliably if you execute the Start-Sleep command above first.
Add-AzIotHubRoutingEndpoint ` -ResourceGroupName $resourceGroup ` -Name $iotHubName `
key-vault About Keys Secrets Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/key-vault/general/about-keys-secrets-certificates.md
Storage account keys|/storage|Supported|Not supported
|||

- **Cryptographic keys**: Supports multiple key types and algorithms, and enables the use of software-protected and HSM-protected keys. For more information, see [About keys](../keys/about-keys.md).
- **Secrets**: Provides secure storage of secrets, such as passwords and database connection strings. For more information, see [About secrets](../secrets/about-secrets.md).
-- **Certificates**: Supports certificates, which are built on top of keys and secrets and add an automated renewal feature. For more information, see [About certificates](../certificates/about-certificates.md).
+- **Certificates**: Supports certificates, which are built on top of keys and secrets and add an automated renewal feature. Keep in mind that when a certificate is created, an addressable key and an addressable secret are also created with the same name. For more information, see [About certificates](../certificates/about-certificates.md). A sketch illustrating this naming relationship appears after this list.
- **Azure Storage account keys**: Can manage keys of an Azure Storage account for you. Internally, Key Vault can list (sync) keys with an Azure Storage Account, and regenerate (rotate) the keys periodically. For more information, see [Manage storage account keys with Key Vault](../secrets/overview-storage-keys.md).

For more general information about Key Vault, see [About Azure Key Vault](overview.md). For more information about Managed HSM pools, see [What is Azure Key Vault Managed HSM?](../managed-hsm/overview.md)
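
As a hedged illustration of the naming relationship called out in the certificates bullet above (the vault and certificate names are placeholders), the certificate, its backing key, and its backing secret can each be retrieved under the same name:

```azurecli-interactive
# All three objects share the name 'mycert' after the certificate is created.
az keyvault certificate show --vault-name myvault --name mycert
az keyvault key show --vault-name myvault --name mycert
az keyvault secret show --vault-name myvault --name mycert
```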
lab-services Approaches For Custom Image Creation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/approaches-for-custom-image-creation.md
# Recommended approaches for creating custom images

This article describes the following recommended approaches for creating a custom image:

-- Create and save a custom image from a [labΓÇÖs template virtual machine (VM)](how-to-create-manage-template.md).
+- Create and save a custom image from a [lab's template virtual machine (VM)](how-to-create-manage-template.md).
- Bring a custom image from outside of the context of a lab by using: - An [Azure VM](https://azure.microsoft.com/services/virtual-machines/). - A VHD in your physical lab environment.
-## Save a custom image from a lab's template VM
+## Save a custom image from a lab's template VM
-Using a labΓÇÖs template VM to create and save a custom image is the simplest way to create an image because it is supported using Lab Services' portal. As a result, both IT departments and educators can create custom images using a labΓÇÖs template VM.
+Using a lab's template VM to create and save a custom image is the simplest way to create an image because it's supported in the Azure Lab Services portal. As a result, both IT departments and educators can create custom images by using a lab's template VM.
-For example, you can start with one of the Marketplace images and then install additional software applications, tooling, etc. that are needed for a class. After youΓÇÖve finished setting up the image, you can save it in the [connected shared image gallery](how-to-attach-detach-shared-image-gallery.md) so that you and other educators can use the image to create new labs.
+For example, you can start with one of the Azure Marketplace images and then install the software applications and tooling that are needed for a class. After you've finished setting up the image, you can save it in the [connected shared image gallery](how-to-attach-detach-shared-image-gallery.md) so that you and other educators can use the image to create new labs.
There are a few key points to be aware of with this approach:

-- Azure Lab Services automatically saves a *specialized* image when you export the image from the template VM. In most cases, specialized images are well-suited for creating new labs because the image retains machine-specific information and user profiles. Using a specialized image helps to ensure that the installed software will run the same when you use the image to create new labs. If you need to create a *generalized* image, you must use one of the other recommended approaches in this article to create a custom image.
- You can create labs based on both generalized and specialized images in Azure Lab Services. For more information about the differences, read the article [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+- Lab Services automatically saves a *specialized* image when you export the image from the template VM. In most cases, specialized images are well suited for creating new labs because the image retains machine-specific information and user profiles. Using a specialized image helps to ensure that the installed software will run the same when you use the image to create new labs. If you need to create a *generalized* image, you must use one of the other recommended approaches in this article to create a custom image.
-- For more advanced scenarios with setting up your image, you may find it helpful to instead create an image outside of labs using either an Azure VM or a VHD from your physical lab environment. Read the next sections for more information.
+ You can create labs based on both generalized and specialized images in Azure Lab Services. For more information about the differences, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
-### How to save a custom image from a lab's template VM
+- For more advanced scenarios with setting up your image, you might find it helpful to instead create an image outside of labs by using either an Azure VM or a VHD from your physical lab environment. Read the next sections for more information.
-You can use a lab's template VM to create either Windows or Linux custom images. For more information, read the article on [how to save the image to a shared image gallery](how-to-use-shared-image-gallery.md#save-an-image-to-the-shared-image-gallery).
+### Use a lab's template VM to save a custom image
+
+You can use a lab's template VM to create either Windows or Linux custom images. For more information, see [Save the image to a shared image gallery](how-to-use-shared-image-gallery.md#save-an-image-to-the-shared-image-gallery).
## Bring a custom image from an Azure VM
-Another approach is to use an Azure VM to set up a custom image. After youΓÇÖve finished setting up the image, you can save it to a shared image gallery so that you and your colleagues can use the image to create new labs.
+Another approach is to use an Azure VM to set up a custom image. After you've finished setting up the image, you can save it to a shared image gallery so that you and your colleagues can use the image to create new labs.
Using an Azure VM gives you more flexibility:

-- You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images. Otherwise, if you use a labΓÇÖs template VM to [export an image](how-to-use-shared-image-gallery.md) the image is always specialized.
-- You have access to more advanced features of an Azure VM that may be helpful for setting up an image. For example, you can use [extensions](../virtual-machines/extensions/overview.md) to do post-deployment configuration and automation. Also, you can access the VMΓÇÖs [boot diagnostics](../virtual-machines/boot-diagnostics.md) and [serial console](/troubleshoot/azure/virtual-machines/serial-console-overview).
+- You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images. Otherwise, if you use a lab's template VM to [export an image](how-to-use-shared-image-gallery.md) the image is always specialized.
+- You have access to more advanced features of an Azure VM that might be helpful for setting up an image. For example, you can use [extensions](../virtual-machines/extensions/overview.md) to do post-deployment configuration and automation. Also, you can access the VM's [boot diagnostics](../virtual-machines/boot-diagnostics.md) and [serial console](/troubleshoot/azure/virtual-machines/serial-console-overview).
-However, setting up an image using an Azure VM is more complex. As a result, IT departments are typically responsible for creating custom images on an Azure VMs.
+Setting up an image by using an Azure VM is more complex. As a result, IT departments are typically responsible for creating custom images on Azure VMs.
-### How to bring a custom image from an Azure VM
+### Use an Azure VM to set up a custom image
Here are the high-level steps to bring a custom image from an Azure VM:
-1. Create an [Azure VM](https://azure.microsoft.com/services/virtual-machines/) using a Windows or Linux Marketplace image.
-1. Connect to the Azure VM and install additional software. You can also make other customizations that are needed for your lab.
-1. When youΓÇÖve finished setting up the image, [save the VM's image to a shared image gallery](../virtual-machines/image-version-vm-powershell.md). As part of this, you will also need to create the imageΓÇÖs definition and version.
-1. Once the custom image is saved in the gallery, your image can be used to create new labs.
+1. Create an [Azure VM](https://azure.microsoft.com/services/virtual-machines/) by using a Windows or Linux Marketplace image.
+1. Connect to the Azure VM and install more software. You can also make other customizations that are needed for your lab.
+1. When you've finished setting up the image, [save the VM's image to a shared image gallery](../virtual-machines/image-version-vm-powershell.md). As part of this step, you'll also need to create the image's definition and version. (A hedged CLI sketch appears after these steps.)
+1. After the custom image is saved in the gallery, you can use your image to create new labs.
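
For the gallery publishing step, here's a rough CLI sketch; all resource names are placeholders, the image definition values are illustrative, and your image may be generalized rather than specialized:

```azurecli-interactive
# Create an image definition in the gallery (values are illustrative).
az sig image-definition create -g MyResourceGroup --gallery-name MyGallery \
    --gallery-image-definition MyLabImage --publisher MySchool --offer LabImages --sku ubuntu-lab \
    --os-type Linux --os-state Specialized --hyper-v-generation V1

# Create an image version from a managed image captured from the VM.
az sig image-version create -g MyResourceGroup --gallery-name MyGallery \
    --gallery-image-definition MyLabImage --gallery-image-version 1.0.0 \
    --managed-image "/subscriptions/<subscriptionId>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/images/MyManagedImage"
```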
-The steps vary depending on if you are creating a custom Windows or Linux image. Read the following articles for the detailed steps:
+The steps vary depending on if you're creating a custom Windows or Linux image. Read the following articles for the detailed steps:
-- [How to bring a custom Windows image from an Azure VM](how-to-bring-custom-windows-image-azure-vm.md)-- [How to bring a custom Linux image from an Azure VM](how-to-bring-custom-linux-image-azure-vm.md)
+- [Bring a custom Windows image from an Azure VM](how-to-bring-custom-windows-image-azure-vm.md)
+- [Bring a custom Linux image from an Azure VM](how-to-bring-custom-linux-image-azure-vm.md)
## Bring a custom image from a VHD in your physical lab environment
-The third approach to consider, is to bring a custom image from a VHD in your physical lab environment to a shared image gallery. Once the image is in a shared image gallery, you and other educators can use the image to create new labs.
+The third approach to consider is to bring a custom image from a VHD in your physical lab environment to a shared image gallery. After the image is in a shared image gallery, you and other educators can use the image to create new labs.
-Here are a few reasons why you may want to use this approach:
+Here are a few reasons why you might want to use this approach:
-- You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images to use in your labs. Otherwise, if you use a [labΓÇÖs template VM](how-to-use-shared-image-gallery.md) to export an image, the image is always specialized.
-- You can access resources that exist within your on-prem environment. For example, you may have large installation files in your on-prem environment that are too time consuming to copy to a labΓÇÖs template VM.
-- You can upload images created using other tools, such as [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), so that you donΓÇÖt have to manually set up an image using a labΓÇÖs template VM.
+- You can create either [generalized or specialized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) images to use in your labs. Otherwise, if you use a [lab's template VM](how-to-use-shared-image-gallery.md) to export an image, the image is always specialized.
+- You can access resources that exist within your on-premises environment. For example, you might have large installation files in your on-premises environment that are too time consuming to copy to a lab's template VM.
+- You can upload images created by using other tools, such as [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), so that you don't have to manually set up an image by using a lab's template VM.
-Bringing a custom image from a VHD is the most advanced approach because you must ensure that the image is set up properly so that it works within Azure. As a result, IT departments are typically responsible for creating custom images from VHDs.
+Bringing a custom image from a VHD is the most advanced approach because you must ensure that the image is set up properly so that it works within Azure. As a result, IT departments are typically responsible for creating custom images from VHDs.
-### How to bring a custom image from a VHD
+### Bring a custom image from a VHD
Here are the high-level steps to bring a custom image from a VHD:
-1. Use [Windows Hyper-V](/virtualization/hyper-v-on-windows/about/) on your on-premises machine to create a Windows or Linux VHD.
-1. Connect to the Hyper-V VM and install additional software. You can also make other customizations that are needed for your lab.
-1. When youΓÇÖve finished setting up the image, upload the VHD to create a [managed disk](../virtual-machines/managed-disks-overview.md) in Azure.
-1. From the managed disk, create the [imageΓÇÖs definition](../virtual-machines/shared-image-galleries.md#image-definitions) and version in a shared image gallery.
-1. Once the custom image is saved in the gallery, the image can be used to create new labs.
+1. Use [Windows Hyper-V](/virtualization/hyper-v-on-windows/about/) on your on-premises machine to create a Windows or Linux VHD.
+1. Connect to the Hyper-V VM and install more software. You can also make other customizations that are needed for your lab.
+1. When you've finished setting up the image, upload the VHD to create a [managed disk](../virtual-machines/managed-disks-overview.md) in Azure. (A hedged CLI sketch appears after these steps.)
+1. From the managed disk, create the [image's definition](../virtual-machines/shared-image-galleries.md#image-definitions) and version in a shared image gallery.
+1. After the custom image is saved in the gallery, you can use the image to create new labs.
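
For the upload step, here's a minimal sketch of creating a managed disk from a VHD with the Azure CLI; the names, size, and local file path are placeholders (the byte size must match your VHD exactly):

```azurecli-interactive
# Create an empty managed disk sized for upload (a 32 GiB VHD plus the 512-byte footer shown).
az disk create -g MyResourceGroup -n MyLabImageDisk --for-upload --upload-size-bytes 34359738880 --sku standard_lrs

# Get a writable SAS, copy the VHD into it, then revoke access.
az disk grant-access -g MyResourceGroup -n MyLabImageDisk --access-level Write --duration-in-seconds 86400
azcopy copy "./ubuntu-lab.vhd" "<SAS-URI-from-previous-command>" --blob-type PageBlob
az disk revoke-access -g MyResourceGroup -n MyLabImageDisk
```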
-The steps vary depending on if you are creating a custom Windows or Linux image. Read the following articles for the detailed steps:
+The steps vary depending on if you're creating a custom Windows or Linux image. Read the following articles for the detailed steps:
-- [How to bring a custom Windows image from a VHD](upload-custom-image-shared-image-gallery.md)-- [How to bring a custom Linux image from a VHD](how-to-bring-custom-linux-image-vhd.md)
+- [Bring a custom Windows image from a VHD](upload-custom-image-shared-image-gallery.md)
+- [Bring a custom Linux image from a VHD](how-to-bring-custom-linux-image-vhd.md)
## Next steps

* [Shared image gallery overview](../virtual-machines/shared-image-galleries.md)
* [Attach or detach a shared image gallery](how-to-attach-detach-shared-image-gallery.md)
-* [How to use shared image gallery](how-to-use-shared-image-gallery.md)
+* [Use a shared image gallery](how-to-use-shared-image-gallery.md)
lab-services How To Bring Custom Linux Image Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-bring-custom-linux-image-azure-vm.md
# Bring a Linux custom image from an Azure virtual machine
-The steps in this article show how to import a custom image that starts from an [Azure virtual machine (VM)](https://azure.microsoft.com/services/virtual-machines/). With this approach, you set up an image on an Azure VM and import the image into a shared image gallery so that it can be used within Lab Services. Before you use this approach for creating a custom image, read the article [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide the best approach for your scenario.
+The steps in this article show how to import a custom image that starts from an [Azure virtual machine (VM)](https://azure.microsoft.com/services/virtual-machines/). With this approach, you set up an image on an Azure VM and import the image into a shared image gallery so that it can be used within Azure Lab Services. Before you use this approach for creating a custom image, read [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide the best approach for your scenario.
## Prerequisites
-You will need permission to create an Azure VM in your school's Azure subscription to complete the steps in this article.
+You'll need permission to create an Azure VM in your school's Azure subscription to complete the steps in this article.
## Prepare a custom image on an Azure VM
-1. Create an Azure VM using the [Azure portal](../virtual-machines/windows/quick-create-portal.md), [PowerShell](../virtual-machines/windows/quick-create-powershell.md), the [Azure CLI](../virtual-machines/windows/quick-create-cli.md), or from an [ARM template](../virtual-machines/windows/quick-create-template.md).
+1. Create an Azure VM by using the [Azure portal](../virtual-machines/windows/quick-create-portal.md), [PowerShell](../virtual-machines/windows/quick-create-powershell.md), the [Azure CLI](../virtual-machines/windows/quick-create-cli.md), or an [Azure Resource Manager template](../virtual-machines/windows/quick-create-template.md).
- When you specify the disk settings, ensure the disk's size is *not* greater than 128 GB. 1. Install software and make any necessary configuration changes to the Azure VM's image.
-1. Optionally, you can generalize the image. If you decide to create a generalized image, follow the steps outlined in [Step 1: Deprovision the VM](../virtual-machines/linux/capture-image.md#step-1-deprovision-the-vm). When you use the **-deprovision+user** command, this generalizes the image. However, it does not guarantee that the image is cleared of all sensitive information or that it is suitable for redistribution.
+1. Optionally, you can generalize the image. If you decide to create a generalized image, follow the steps outlined in [Step 1: Deprovision the VM](../virtual-machines/linux/capture-image.md#step-1-deprovision-the-vm). When you use the **-deprovision+user** command, it generalizes the image. But it doesn't guarantee that the image is cleared of all sensitive information or that it's suitable for redistribution.
Otherwise, if you decide to create a specialized image, you can skip to the next step.
- You should create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+ Create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
## Import the custom image into a shared image gallery 1. In a shared image gallery, [create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition) or choose an existing image definition. - Choose **Gen 1** for the **VM generation**.
- - Choose whether you are creating a **specialized** or **generalized** image for the **Operating system state**.
+ - Choose whether you're creating a **specialized** or **generalized** image for the **Operating system state**.
For more information about the values you can specify for an image definition, see [Image definitions](../virtual-machines/shared-image-galleries.md#image-definitions). You can also choose to use an existing image definition and create a new version for your custom image. 1. [Create an image version](../virtual-machines/windows/shared-images-portal.md#create-an-image-version).
- - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*.
- - For the **Source**, choose **Disks and/or snapshots** from the drop-down list.
+ - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*.
+ - For the **Source**, select **Disks and/or snapshots** from the dropdown list.
- For the **OS disk** property, choose your Azure VM's disk that you created in previous steps.
-You can also automate the above steps using PowerShell. See the following script and accompanying ReadMe for more information:
-- [Bring image to a shared image gallery script](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Scripts/BringImageToSharedImageGallery/)
+You can also automate the preceding steps by using PowerShell. For more information, see the script and ReadMe in [Bring image to a shared image gallery script](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Scripts/BringImageToSharedImageGallery/).
## Create a lab
-1. [Create the lab](tutorial-setup-classroom-lab.md) in Lab Services and select the custom image from the shared image gallery.
+[Create the lab](tutorial-setup-classroom-lab.md) in Lab Services, and select the custom image from the shared image gallery.
## Next steps

* [Shared image gallery overview](../virtual-machines/shared-image-galleries.md)
* [Attach or detach a shared image gallery](how-to-attach-detach-shared-image-gallery.md)
-* [How to use a shared image gallery](how-to-use-shared-image-gallery.md)
+* [Use a shared image gallery](how-to-use-shared-image-gallery.md)
lab-services How To Bring Custom Linux Image Vhd https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-bring-custom-linux-image-vhd.md
# Bring a Linux custom image from your physical lab environment
-The steps in this article show how to import a Linux custom image that starts from your physical lab environment. With this approach, you create a VHD from your physical environment and import the VHD into a shared image gallery so that it can be used within Lab Services. Before you use this approach for creating a custom image, read the article [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide which approach is best for your scenario.
+The steps in this article show how to import a Linux custom image that starts from your physical lab environment. With this approach, you create a VHD from your physical environment and import the VHD into a shared image gallery so that it can be used within Azure Lab Services. Before you use this approach for creating a custom image, read [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide which approach is best for your scenario.
-Azure endorses a variety of [distributions and versions](../virtual-machines/linux/endorsed-distros.md#supported-distributions-and-versions). The steps to bring a custom Linux image from a VHD varies for each distribution. Every distribution is different because each one has unique prerequisites that must be set up to run on Azure.
+Azure endorses a variety of [distributions and versions](../virtual-machines/linux/endorsed-distros.md#supported-distributions-and-versions). The steps to bring a custom Linux image from a VHD vary for each distribution. Every distribution is different because each one has unique prerequisites that must be set up to run on Azure.
-In this article, we'll show the steps to bring a custom Ubuntu 16.04\18.04\20.04 image from a VHD. For information on using a VHD to create custom images for other distributions, see the article [Generic steps for Linux distributions](../virtual-machines/linux/create-upload-generic.md).
+In this article, we'll show the steps to bring a custom Ubuntu 16.04\18.04\20.04 image from a VHD. For information on using a VHD to create custom images for other distributions, see [Generic steps for Linux distributions](../virtual-machines/linux/create-upload-generic.md).
## Prerequisites
-You will need permission to create an [Azure managed disk](../virtual-machines/managed-disks-overview.md) in your school's Azure subscription to complete the steps in this article.
+You'll need permission to create an [Azure managed disk](../virtual-machines/managed-disks-overview.md) in your school's Azure subscription to complete the steps in this article.
-When moving images from a physical lab environment to Lab Services, you should restructure each image so that it only includes software needed for a lab's class. For more information, read the [Moving from a Physical Lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931) blog post.
+When you move images from a physical lab environment to Lab Services, restructure each image so that it only includes software needed for a lab's class. For more information, read the [Moving from a Physical Lab to Azure Lab Services](https://techcommunity.microsoft.com/t5/azure-lab-services/moving-from-a-physical-lab-to-azure-lab-services/ba-p/1654931) blog post.
-## Prepare a custom image using Hyper-V Manager
+## Prepare a custom image by using Hyper-V Manager
-The following steps show how to create an Ubuntu 16.04\18.04\20.04 image from a Hyper-V virtual machine (VM) using Windows Hyper-V Manager.
+The following steps show how to create an Ubuntu 16.04\18.04\20.04 image from a Hyper-V virtual machine (VM) by using Windows Hyper-V Manager.
-1. Download the official [Linux Ubuntu Server](https://ubuntu.com/server/docs) image to your Windows host machine that you will use to set up the custom image on a Hyper-V VM.
+1. Download the official [Linux Ubuntu Server](https://ubuntu.com/server/docs) image to your Windows host machine that you'll use to set up the custom image on a Hyper-V VM.
- We recommend using an Ubuntu image that does *not* have the [GNOME](https://www.gnome.org/) GUI desktop installed. GNOME currently has a conflict with the Azure Linux Agent which is needed for the image to work properly in Azure Lab Services. For example, we recommend that you use the Ubuntu Server image and install a different GUI desktop, such as XFCE or MATE.
+ We recommend using an Ubuntu image that does *not* have the [GNOME](https://www.gnome.org/) GUI desktop installed. GNOME currently has a conflict with the Azure Linux Agent, which is needed for the image to work properly in Lab Services. For example, use the Ubuntu Server image and install a different GUI desktop, such as XFCE or MATE.
- Ubuntu also publishes prebuilt [Azure VHDs for download](https://cloud-images.ubuntu.com/). However, these VHDs are intended for creating custom images a from Linux host machine and hypervisor, such as KVM. These VHDs require that you first set the default user password which can only be done using Linux tooling, such as qemu, which aren't available for Windows. As a result, when you create a custom image using Windows Hyper-V, you won't be able to connect to these VHDs to make image customizations. For more information about the prebuilt Azure VHDs, read [Ubuntu's documentation](https://help.ubuntu.com/community/UEC/Images?_ga=2.114783623.1858181609.1624392241-1226151842.1623682781#QEMU_invocation).
+ Ubuntu also publishes prebuilt [Azure VHDs for download](https://cloud-images.ubuntu.com/). These VHDs are intended for creating custom images from a Linux host machine and hypervisor, such as KVM. These VHDs require that you first set the default user password, which can only be done by using Linux tooling, such as qemu, which isn't available for Windows. As a result, when you create a custom image by using Windows Hyper-V, you won't be able to connect to these VHDs to make image customizations. For more information about the prebuilt Azure VHDs, read [Ubuntu's documentation](https://help.ubuntu.com/community/UEC/Images?_ga=2.114783623.1858181609.1624392241-1226151842.1623682781#QEMU_invocation).
-1. Start with a Hyper-V VM in your physical lab environment that has been created from your image. Read the article [on how to create a virtual machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v) for more information as set the settings as shown below:
- - The VM must be created as a **Generation 1** VM.
- - Use the **Default Switch** network configuration option to allow the VM to connect to the internet.
- - In the **Connect Virtual Hard Disk** settings, the disk's **Size** must *not* be greater than 128 GB as shown in the below image.
+1. Start with a Hyper-V VM in your physical lab environment that was created from your image. For more information, read the article on [how to create a virtual machine in Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-machine-in-hyper-v). Set the settings as shown here:
+ - The VM must be created as a **Generation 1** VM.
+ - Use the **Default Switch** network configuration option to allow the VM to connect to the internet.
+ - In the **Connect Virtual Hard Disk** settings, the disk's **Size** must *not* be greater than 128 GB, as shown in the following image.
- :::image type="content" source="./media/upload-custom-image-shared-image-gallery/connect-virtual-hard-disk.png" alt-text="Connect virtual hard disk":::
+ :::image type="content" source="./media/upload-custom-image-shared-image-gallery/connect-virtual-hard-disk.png" alt-text="Screenshot that shows the Connect Virtual Hard Disk screen.":::
- In the **Installation Options** settings, select the **.iso** file that you previously downloaded from Ubuntu.
- Images with disk size greater than 128 GB are *not* supported by Lab Services.
+ Images with a disk size greater than 128 GB are *not* supported by Lab Services.
-1. Connect to the Hyper-V VM and prepare it for Azure by following the steps in this article:
- - [Manual steps to create and upload an Ubuntu VHD](../virtual-machines/linux/create-upload-ubuntu.md#manual-steps)
+1. Connect to the Hyper-V VM and prepare it for Azure by following the steps in [Manual steps to create and upload an Ubuntu VHD](../virtual-machines/linux/create-upload-ubuntu.md#manual-steps).
- The steps to prepare a Linux image for Azure vary based on the distribution. See the article [distributions and versions](../virtual-machines/linux/endorsed-distros.md#supported-distributions-and-versions) for more information and specific steps for each distribution.
+ The steps to prepare a Linux image for Azure vary based on the distribution. For more information and specific steps for each distribution, see [distributions and versions](../virtual-machines/linux/endorsed-distros.md#supported-distributions-and-versions).
- When you follow the above steps, there are a few important points to highlight:
- - The steps create a [generalized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) image when when you run the **deprovision+user** command. However, it does not guarantee that the image is cleared of all sensitive information or that it is suitable for redistribution.
- - The final step is to convert the **VHDX** file to a **VHD** file. Here are equivalent steps that show how to do this with **Hyper-V Manager**:
+ When you follow the preceding steps, there are a few important points to highlight:
+ - The steps create a [generalized](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) image when you run the **deprovision+user** command. But it doesn't guarantee that the image is cleared of all sensitive information or that it's suitable for redistribution.
+ - The final step is to convert the **VHDX** file to a **VHD** file. Here are equivalent steps that show how to do it with **Hyper-V Manager**:
- 1. Navigate to **Hyper-V Manager** -> **Action** -> **Edit Disk**.
- 1. Next, **Convert** the disk from a VHDX to a VHD.
- 1. For the **Disk Type**, select **Fixed size**.
- - If you also choose to expand the disk size at this point, make sure that you do *not* exceed 128 GB.
- :::image type="content" source="./media/upload-custom-image-shared-image-gallery/choose-action.png" alt-text="Choose action":::
+ 1. Go to **Hyper-V Manager** > **Action** > **Edit Disk**.
+ 1. Next, **Convert** the disk from a VHDX to a VHD.
+ 1. For the **Disk Type**, select **Fixed size**.
+ - If you also choose to expand the disk size at this point, make sure that you do *not* exceed 128 GB.
+ :::image type="content" source="./media/upload-custom-image-shared-image-gallery/choose-action.png" alt-text="Screenshot that shows the Choose Action screen.":::
To help with resizing the disk and converting from a VHDX to a VHD, you can also use the following PowerShell cmdlets: - [Resize-VHD](/powershell/module/hyper-v/resize-vhd?view=windowsserver2019-ps) - [Convert-VHD](/powershell/module/hyper-v/convert-vhd?view=windowsserver2019-ps) ## Upload the custom image to a shared image gallery 1. Upload the VHD to Azure to create a managed disk.
- 1. You can use either Storage Explorer or AzCopy from the command line, as shown in [Upload a VHD to Azure or copy a managed disk to another region](../virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md).
+ 1. You can use either Storage Explorer or AzCopy from the command line, as shown in [Upload a VHD to Azure or copy a managed disk to another region](../virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md).
1. After you've uploaded the VHD, you should now have a managed disk that you can see in the Azure portal.
- If your machine goes to sleep or locks, the upload process may get interrupted and fail. Also, make sure that when AzCopy completes, that you revoke SAS access to the disk. Otherwise, when you attempt to create an image from the disk, you will see an error: **Operation 'Create Image' is not supported with disk 'your disk name' in state 'Active Upload'. Error Code: OperationNotAllowed**
+ If your machine goes to sleep or locks, the upload process might get interrupted and fail. Also, make sure that when AzCopy completes, you revoke SAS access to the disk. Otherwise, when you attempt to create an image from the disk, you'll see the error "Operation 'Create Image' is not supported with disk 'your disk name' in state 'Active Upload'. Error Code: OperationNotAllowed."
- The Azure portal's **Size+Performance** tab for the managed disk allows you to change your disk size. As mentioned before, the size must *not* be greater than 128 GB.
+ Use the Azure portal's **Size+Performance** tab for the managed disk to change your disk size. As mentioned before, the size must *not* be greater than 128 GB.
1. In a shared image gallery, create an image definition and version:
- 1. [Create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition).
+ 1. [Create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition):
- Choose **Gen 1** for the **VM generation**. - Choose **Linux** for the **Operating system**. - Choose **generalized** for the **Operating system state**.
To help with resizing the VHD and converting to a VHDX, you can also use the fol
You can also choose to use an existing image definition and create a new version for your custom image.
-1. [Create an image version](../virtual-machines/windows/shared-images-portal.md#create-an-image-version).
- - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*. When you use Lab Services to create a lab and choose a custom image, the most recent version of the image is automatically used. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, then Patch.
- - For the **Source**, choose **Disks and/or snapshots** from the drop-down list.
+1. [Create an image version](../virtual-machines/windows/shared-images-portal.md#create-an-image-version):
+ - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*. When you use Lab Services to create a lab and choose a custom image, the most recent version of the image is automatically used. The most recent version is chosen based on the highest value of MajorVersion, then MinorVersion, and then Patch.
+ - For the **Source**, select **Disks and/or snapshots** from the dropdown list.
- For the **OS disk** property, choose the disk that you created in previous steps. For more information about the values you can specify for an image definition, see [Image versions](../virtual-machines/shared-image-galleries.md#image-versions). ## Create a lab
-1. [Create the lab](tutorial-setup-classroom-lab.md) in Lab Services and select the custom image from the shared image gallery.
+[Create the lab](tutorial-setup-classroom-lab.md) in Lab Services and select the custom image from the shared image gallery.
- If you expanded the disk *after* the OS was installed on the original Hyper-V VM, you may also need to extend the partition in Linux's filesystem to use the unallocated disk space:
- - Log into the lab's template VM and follow steps similar to what is shown in the article [Expand a disk partition and filesystem](../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
+If you expanded the disk *after* the OS was installed on the original Hyper-V VM, you might also need to extend the partition in Linux's filesystem to use the unallocated disk space:
+- Log in to the lab's template VM and follow steps similar to what is shown in [Expand a disk partition and filesystem](../virtual-machines/linux/expand-disks.md#expand-a-disk-partition-and-filesystem).
- The OS disk typically exists on the **/dev/sad2** partition. To view the current size of the OS disk's partition, use the following command: **df -h**.
+The OS disk typically exists on the **/dev/sda2** partition. To view the current size of the OS disk's partition, use the command **df -h**.
## Next steps * [Shared image gallery overview](../virtual-machines/shared-image-galleries.md) * [Attach or detach a shared image gallery](how-to-attach-detach-shared-image-gallery.md)
-* [How to use a shared image gallery](how-to-use-shared-image-gallery.md)
+* [Use a shared image gallery](how-to-use-shared-image-gallery.md)
lab-services How To Bring Custom Windows Image Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lab-services/how-to-bring-custom-windows-image-azure-vm.md
# Bring a Windows custom image from an Azure virtual machine
-The steps in this article show how to import a custom image that starts from an [Azure virtual machine (VM)](https://azure.microsoft.com/services/virtual-machines/). With this approach, you set up an image on an Azure VM and import the image into a shared image gallery so that it can be used within Lab Services. Before you use this approach for creating a custom image, read the article [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide the best approach for your scenario.
+The steps in this article show how to import a custom image that starts from an [Azure virtual machine (VM)](https://azure.microsoft.com/services/virtual-machines/). With this approach, you set up an image on an Azure VM and import the image into a shared image gallery so that it can be used within Azure Lab Services. Before you use this approach for creating a custom image, read [Recommended approaches for creating custom images](approaches-for-custom-image-creation.md) to decide the best approach for your scenario.
## Prerequisites
-You will need permission to create an Azure VM in your school's Azure subscription to complete the steps in this article.
+You'll need permission to create an Azure VM in your school's Azure subscription to complete the steps in this article.
## Prepare a custom image on an Azure VM
-1. Create an Azure VM using the [Azure portal](../virtual-machines/windows/quick-create-portal.md), [PowerShell](../virtual-machines/windows/quick-create-powershell.md), the [Azure CLI](../virtual-machines/windows/quick-create-cli.md), or from an [ARM template](../virtual-machines/windows/quick-create-template.md).
+1. Create an Azure VM by using the [Azure portal](../virtual-machines/windows/quick-create-portal.md), [PowerShell](../virtual-machines/windows/quick-create-powershell.md), the [Azure CLI](../virtual-machines/windows/quick-create-cli.md), or an [Azure Resource Manager template](../virtual-machines/windows/quick-create-template.md).
- When you specify the disk settings, ensure the disk's size is *not* greater than 128 GB. 1. Install software and make any necessary configuration changes to the Azure VM's image.
-1. Optionally, you can generalize the image. Run [SysPrep](../virtual-machines/generalize.md#windows) if you need to create a generalized image. Otherwise, if you're creating a specialized image, you can skip to the next step.
+1. Optionally, you can generalize the image. Run [SysPrep](../virtual-machines/generalize.md#windows) if you need to create a generalized image. Otherwise, if you're creating a specialized image, you can skip to the next step.
- You should create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
+ Create a specialized image if you want to maintain machine-specific information and user profiles. For more information about the differences between generalized and specialized images, see [Generalized and specialized images](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images).
## Import the custom image into a shared image gallery 1. In a shared image gallery, [create an image definition](../virtual-machines/windows/shared-images-portal.md#create-an-image-definition) or choose an existing image definition. - Choose **Gen 1** for the **VM generation**.
- - Choose whether you are creating a **specialized** or **generalized** image for the **Operating system state**.
+ - Choose whether you're creating a **specialized** or **generalized** image for the **Operating system state**.
For more information about the values you can specify for an image definition, see [Image definitions](../virtual-machines/shared-image-galleries.md#image-definitions).
You will need permission to create an Azure VM in your school's Azure subscripti
1. [Create an image version](../virtual-machines/windows/shared-images-portal.md#create-an-image-version). - The **Version number** property uses the following format: *MajorVersion.MinorVersion.Patch*.
- - For the **Source**, choose **Disks and/or snapshots** from the drop-down list.
+ - For the **Source**, select **Disks and/or snapshots** from the dropdown list.
- For the **OS disk** property, choose your Azure VM's disk that you created in previous steps.
- You can also import your custom image from an Azure VM to shared image gallery using PowerShell. See the following script and accompanying ReadMe for more information:
- - [Bring image to shared image gallery script](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Scripts/BringImageToSharedImageGallery/)
+ You can also import your custom image from an Azure VM to a shared image gallery by using PowerShell. For more information, see the script and ReadMe in [Bring image to shared image gallery script](https://github.com/Azure/azure-devtestlab/tree/master/samples/ClassroomLabs/Scripts/BringImageToSharedImageGallery/).
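   If you want to script this directly instead of using the linked sample, a minimal sketch might look like the following. It assumes the VM, gallery, and image definition names are placeholders you replace with your own:

```azurepowershell-interactive
# Minimal sketch (not the official sample script): create an image version
# directly from the Azure VM prepared in the previous steps.
$vm = Get-AzVM -ResourceGroupName 'myResourceGroup' -Name 'myVM'

New-AzGalleryImageVersion `
    -ResourceGroupName 'myResourceGroup' `
    -GalleryName 'myGallery' `
    -GalleryImageDefinitionName 'myImageDefinition' `
    -Name '1.0.0' `
    -Location $vm.Location `
    -SourceImageId $vm.Id
```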
## Create a lab
-1. [Create the lab](tutorial-setup-classroom-lab.md) in Lab Services and select the custom image from the shared image gallery.
+[Create the lab](tutorial-setup-classroom-lab.md) in Lab Services, and select the custom image from the shared image gallery.
## Next steps * [Shared image gallery overview](../virtual-machines/shared-image-galleries.md) * [Attach or detach a shared image gallery](how-to-attach-detach-shared-image-gallery.md)
-* [How to use a shared image gallery](how-to-use-shared-image-gallery.md)
+* [Use a shared image gallery](how-to-use-shared-image-gallery.md)
lighthouse Create Eligible Authorizations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/create-eligible-authorizations.md
To include eligible authorizations when you onboard a customer, use one of the t
|To onboard this (with eligible authorizations) |Use this Azure Resource Manager template |And modify this parameter file | ||||
-|Subscription |[subscription.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription.json) |[subscription.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription.Parameters.json) |
+|Subscription |[subscription.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription.json) |[subscription.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription.parameters.json) |
|Subscription (with approvers) |[subscription-managing-tenant-approvers.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription-managing-tenant-approvers.json) |[subscription-managing-tenant-approvers.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/subscription/subscription-managing-tenant-approvers.parameters.json) | |Resource group |[rg.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg.json) |[rg.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg.parameters.json) | |Resource group (with approvers) |[rg-managing-tenant-approvers.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg-managing-tenant-approvers.json) |[rg-managing-tenant-approvers.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management-eligible-authorizations/rg/rg-managing-tenant-approvers.parameters.json) |
lighthouse Onboard Customer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/lighthouse/how-to/onboard-customer.md
Title: Onboard a customer to Azure Lighthouse description: Learn how to onboard a customer to Azure Lighthouse, allowing their resources to be accessed and managed by users in your tenant. Previously updated : 08/16/2021 Last updated : 08/25/2021
This article explains how you, as a service provider, can onboard a customer to
You can repeat the onboarding process for multiple customers. When a user with the appropriate permissions signs in to your managing tenant, that user can be authorized across customer tenancy scopes to perform management operations, without having to sign in to every individual customer tenant.
-To track your impact across customer engagements and receive recognition, associate your Microsoft Partner Network (MPN) ID with at least one user account that has access to each of your onboarded subscriptions. You'll need to perform this association in your service provider tenant. We recommend creating a service principal account in your tenant that is associated with your MPN ID, then including that service principal every time you onboard a customer. For more info, see [Link your partner ID to enable partner earned credit on delegated resources](partner-earned-credit.md).
- > [!NOTE]
-> Customers can alternately be onboarded to Azure Lighthouse when they purchase a Managed Service offer (public or private) that you [publish to Azure Marketplace](publish-managed-services-offers.md). You can also use the onboarding process described here along with offers published to Azure Marketplace.
+> Customers can alternatively be onboarded to Azure Lighthouse when they purchase a Managed Service offer (public or private) that you [publish to Azure Marketplace](publish-managed-services-offers.md). You can also use the onboarding process described here in conjunction with offers published to Azure Marketplace.
The onboarding process requires actions to be taken from within both the service provider's tenant and from the customer's tenant. All of these steps are described in this article.
To onboard a customer's tenant, it must have an active Azure subscription. You'l
- The tenant ID of the customer's tenant (which will have resources managed by the service provider). - The subscription IDs for each specific subscription in the customer's tenant that will be managed by the service provider (or that contains the resource group(s) that will be managed by the service provider).
-If you don't have these ID values already, you can retrieve them in one of the following ways. Be sure and use these exact values in your deployment.
-
-### Azure portal
-
-Your tenant ID can be seen by hovering over your account name on the upper right-hand side of the Azure portal, or by selecting **Switch directory**. To select and copy your tenant ID, search for "Azure Active Directory" from within the portal, then select **Properties** and copy the value shown in the **Directory ID** field. To find the ID of a subscription in the customer tenant, search for "Subscriptions" and then select the appropriate subscription ID.
-
-### PowerShell
-
-```azurepowershell-interactive
-# Log in first with Connect-AzAccount if you're not using Cloud Shell
-
-Select-AzSubscription <subscriptionId>
-```
-
-### Azure CLI
-
-```azurecli-interactive
-# Log in first with az login if you're not using Cloud Shell
-
-az account set --subscription <subscriptionId/name>
-az account show
-```
-
-> [!NOTE]
-> When onboarding a subscription (or one or more resource groups within a subscription) using the process described here, the **Microsoft.ManagedServices** resource provider will be registered for that subscription.
+If you don't know the ID for a tenant, you can [retrieve it by using the Azure portal, Azure PowerShell, or Azure CLI](../../active-directory/fundamentals/active-directory-how-to-find-tenant.md).
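For example, with Azure PowerShell (a quick sketch; the Azure CLI equivalents are covered in the linked article):

```azurepowershell-interactive
# Log in first with Connect-AzAccount if you're not using Cloud Shell

# Tenant ID of the current context, and all tenants your account can access
(Get-AzContext).Tenant.Id
Get-AzTenant

# Subscription IDs and names available in the current tenant
Get-AzSubscription
```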
## Define roles and permissions
As a service provider, you may want to perform multiple tasks for a single custo
> [!NOTE] > Unless explicitly specified, references to a "user" in the Azure Lighthouse documentation can apply to an Azure AD user, group, or service principal in an authorization.
-To make management easier, we recommend using Azure AD user groups for each role whenever possible, rather than to individual users. This gives you the flexibility to add or remove individual users to the group that has access, so that you don't have to repeat the onboarding process to make user changes. You can also assign roles to a service principal, which can be useful for automation scenarios.
-
-> [!IMPORTANT]
-> In order to add permissions for an Azure AD group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-
-When defining your authorizations, be sure to follow the principle of least privilege so that users only have the permissions needed to complete their job. For information about supported roles and best practices, see [Tenants, users, and roles in Azure Lighthouse scenarios](../concepts/tenants-users-roles.md).
+To define authorizations, you'll need to know the ID values for each user, user group, or service principal in the managing tenant to which you want to grant access. You can [retrieve these IDs by using the Azure portal, Azure PowerShell, or Azure CLI](../../role-based-access-control/role-assignments-template.md#get-object-ids) from within the managing tenant. You'll also need the role definition ID for each [built-in role](../../role-based-access-control/built-in-roles.md) you want to assign.
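For reference, the same lookups can be done with a few lines of Azure PowerShell run from the managing tenant (a convenience sketch; replace the placeholder values with your own):

```azurepowershell-interactive
# Log in first with Connect-AzAccount if you're not using Cloud Shell

# Object ID of an Azure AD group
(Get-AzADGroup -DisplayName '<yourGroupName>').Id

# Object ID of an Azure AD user
(Get-AzADUser -UserPrincipalName '<yourUPN>').Id

# Object ID of a service principal
(Get-AzADApplication -DisplayName '<appDisplayName>' | Get-AzADServicePrincipal).Id

# Role definition ID of a built-in role
(Get-AzRoleDefinition -Name '<roleName>').Id
```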
> [!TIP]
-> You can also create *eligible authorizations* that let users in your managing tenant temporarily elevate their role. This feature is currently in public preview and has specific licensing requirements. For more information, see [Create eligible authorizations](create-eligible-authorizations.md).
-
-To define authorizations, you'll need to know the ID values for each user, user group, or service principal in the service provider tenant to which you want to grant access. You'll also need the role definition ID for each built-in role you want to assign. If you don't have them already, you can retrieve them by running the commands below from within the service provider tenant.
-
-### PowerShell
-
-```azurepowershell-interactive
-# Log in first with Connect-AzAccount if you're not using Cloud Shell
-
-# To retrieve the objectId for an Azure AD group
-(Get-AzADGroup -DisplayName '<yourGroupName>').id
-
-# To retrieve the objectId for an Azure AD user
-(Get-AzADUser -UserPrincipalName '<yourUPN>').id
-
-# To retrieve the objectId for an SPN
-(Get-AzADApplication -DisplayName '<appDisplayName>' | Get-AzADServicePrincipal).Id
-
-# To retrieve role definition IDs
-(Get-AzRoleDefinition -Name '<roleName>').id
-```
-
-### Azure CLI
-
-```azurecli-interactive
-# Log in first with az login if you're not using Cloud Shell
+> We recommend assigning the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer, so that users in your tenant can [remove access to the delegation](remove-delegation.md) later if needed. If this role is not assigned, delegated resources can only be removed by a user in the customer's tenant.
-# To retrieve the objectId for an Azure AD group
-az ad group list --query "[?displayName == '<yourGroupName>'].objectId" --output tsv
+We recommend using Azure AD user groups for each role whenever possible, rather than assigning roles to individual users. This gives you the flexibility to add users to or remove them from the group that has access, so that you don't have to repeat the onboarding process to make user changes. You can also assign roles to a service principal, which can be useful for automation scenarios.
-# To retrieve the objectId for an Azure AD user
-az ad user show --id "<yourUPN>" --query "objectId" --output tsv
+> [!IMPORTANT]
+> In order to add permissions for an Azure AD group, the **Group type** must be set to **Security**. This option is selected when the group is created. For more information, see [Create a basic group and add members using Azure Active Directory](../../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-# To retrieve the objectId for an SPN
-az ad sp list --query "[?displayName == '<spDisplayName>'].objectId" --output tsv
+When defining your authorizations, be sure to follow the principle of least privilege so that users only have the permissions needed to complete their job. For information about supported roles and best practices, see [Tenants, users, and roles in Azure Lighthouse scenarios](../concepts/tenants-users-roles.md).
-# To retrieve role definition IDs
-az role definition list --name "<roleName>" | grep name
-```
+To track your impact across customer engagements and receive recognition, associate your Microsoft Partner Network (MPN) ID with at least one user account that has access to each of your onboarded subscriptions. You'll need to perform this association in your service provider tenant. We recommend creating a service principal account in your tenant that is associated with your MPN ID, then including that service principal every time you onboard a customer. For more info, see [Link your partner ID to enable partner earned credit on delegated resources](partner-earned-credit.md).
> [!TIP]
-> We recommend assigning the [Managed Services Registration Assignment Delete Role](../../role-based-access-control/built-in-roles.md#managed-services-registration-assignment-delete-role) when onboarding a customer, so that users in your tenant can [remove access to the delegation](remove-delegation.md) later if needed. If this role is not assigned, delegated resources can only be removed by a user in the customer's tenant.
+> You can also create *eligible authorizations* that let users in your managing tenant temporarily elevate their role. This feature is currently in public preview and has specific licensing requirements. For more information, see [Create eligible authorizations](create-eligible-authorizations.md).
## Create an Azure Resource Manager template
The template you choose will depend on whether you are onboarding an entire subs
|To onboard this |Use this Azure Resource Manager template |And modify this parameter file | ||||
-|Subscription |[delegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/delegatedResourceManagement.json) |[delegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/delegatedResourceManagement.parameters.json) |
-|Resource group |[rgDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/rg-delegated-resource-management/rgDelegatedResourceManagement.json) |[rgDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/rg-delegated-resource-management/rgDelegatedResourceManagement.parameters.json) |
-|Multiple resource groups within a subscription |[multipleRgDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/rg-delegated-resource-management/multipleRgDelegatedResourceManagement.json) |[multipleRgDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/rg-delegated-resource-management/multipleRgDelegatedResourceManagement.parameters.json) |
+|Subscription |[delegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/subscription/subscription.json) |[delegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/subscription/subscription.parameters.json) |
+|Resource group |[rgDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/rg.json) |[rgDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/rg.parameters.json) |
+|Multiple resource groups within a subscription |[multipleRgDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/multi-rg.json) |[multipleRgDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/delegated-resource-management/rg/multiple-rg.parameters.json) |
|Subscription (when using an offer published to Azure Marketplace) |[marketplaceDelegatedResourceManagement.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/marketplace-delegated-resource-management/marketplaceDelegatedResourceManagement.json) |[marketplaceDelegatedResourceManagement.parameters.json](https://github.com/Azure/Azure-Lighthouse-samples/blob/master/templates/marketplace-delegated-resource-management/marketplaceDelegatedResourceManagement.parameters.json) | If you want to include [eligible authorizations](create-eligible-authorizations.md#create-eligible-authorizations-using-azure-resource-manager-templates) (currently in public preview), select the corresponding template from the [delegated-resource-management-eligible-authorizations section of our samples repo](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management-eligible-authorizations).
If you want to include [eligible authorizations](create-eligible-authorizations.
> [!TIP] > While you can't onboard an entire management group in one deployment, you can deploy a policy to [onboard each subscription in a management group](onboard-management-group.md). You'll then have access to all of the subscriptions in the management group, although you'll have to work on them as individual subscriptions (rather than taking actions on the management group resource directly).
-The following example shows a modified **delegatedResourceManagement.parameters.json** file that can be used to onboard a subscription. The resource group parameter files (located in the [rg-delegated-resource-management](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/rg-delegated-resource-management) folder) are similar, but also include an **rgName** parameter to identify the specific resource group(s) to be onboarded.
+The following example shows a modified **delegatedResourceManagement.parameters.json** file that can be used to onboard a subscription. The resource group parameter files (located in the [rg-delegated-resource-management](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/delegated-resource-management/rg) folder) are similar, but also include an **rgName** parameter to identify the specific resource group(s) to be onboarded.
```json {
The last authorization in the example above adds a **principalId** with the User
Once you have created your template, a user in the customer's tenant must deploy it within their tenant. A separate deployment is needed for each subscription that you want to onboard (or for each subscription that contains resource groups that you want to onboard).
+When onboarding a subscription (or one or more resource groups within a subscription) using the process described here, the **Microsoft.ManagedServices** resource provider will be registered for that subscription.
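If you want to confirm that registration from the customer's subscription after deployment, a quick check like the following should work (a sketch, not part of the required onboarding steps):

```azurepowershell-interactive
# Check the registration state of the Microsoft.ManagedServices provider
Get-AzResourceProvider -ProviderNamespace Microsoft.ManagedServices |
    Select-Object ProviderNamespace, RegistrationState

# Register it manually if needed
Register-AzResourceProvider -ProviderNamespace Microsoft.ManagedServices
```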
+ > [!IMPORTANT] > This deployment must be done by a non-guest account in the customer's tenant who has a role with the `Microsoft.Authorization/roleAssignments/write` permission, such as [Owner](../../role-based-access-control/built-in-roles.md#owner), for the subscription being onboarded (or which contains the resource groups that are being onboarded). To find users who can delegate the subscription, a user in the customer's tenant can select the subscription in the Azure portal, open **Access control (IAM)**, and [view all users with the Owner role](../../role-based-access-control/role-assignments-list-portal.md#list-owners-of-a-subscription). >
load-balancer Manage Rules How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/manage-rules-how-to.md
+
+ Title: Manage rules for Azure Load Balancer - Azure portal
+description: In this article, learn how to manage rules for Azure Load Balancer using the Azure portal
+ Last updated : 08/23/2021
+# Manage rules for Azure Load Balancer using the Azure portal
+
+Azure Load Balancer supports rules to configure traffic to the backend pool. In this article, you'll learn how to manage the rules for an Azure Load Balancer.
+
+There are four types of rules:
+
+* **Load-balancing rules** - A load-balancing rule defines how incoming traffic is distributed to **all** the instances within the backend pool. It maps a given frontend IP configuration and port to multiple backend IP addresses and ports. An example would be a rule created on port 80 to load balance web traffic.
+
+* **High availability ports** - A load-balancing rule configured with **protocol - all** and **port - 0**. A single HA ports rule load-balances all TCP and UDP traffic that arrives on any port of an internal standard load balancer. HA ports rules help with scenarios such as high availability and scale for network virtual appliances (NVAs) inside virtual networks, and whenever a large number of ports must be load-balanced.
+
+* **Inbound NAT rule** - An inbound NAT rule forwards incoming traffic sent to a frontend IP address and port combination. The traffic is sent to a **specific** virtual machine or instance in the backend pool. Port forwarding is done by the same hash-based distribution as load balancing.
+
+* **Outbound rule** - An outbound rule configures outbound Network Address Translation (NAT) for **all** virtual machines or instances identified by the backend pool. This rule enables instances in the backend pool to make outbound connections to the internet or other endpoints.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**.
+
+- A standard internal load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create an internal load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-internal-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**.
+
+## Load-balancing rules
+
+In this section, you'll learn how to add and remove a load-balancing rule. A public load balancer is used in the examples.
+
+### Add a load-balancing rule
+
+In this example, you'll create a rule to load balance port 80.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Load balancing rules** in **Settings**.
+
+5. Select **+ Add** in **Load balancing rules** to add a rule.
+
+ :::image type="content" source="./media/manage-rules-how-to/load-balancing-rules.png" alt-text="Screenshot of the load-balancing rules page in a standard load balancer." border="true":::
+
+6. Enter or select the following information in **Add load balancing rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6**. |
+ | Frontend IP address | Select the frontend IP address of the load balancer. <br> In this example, it's **myFrontendIP**. |
+ | Protocol | Leave the default of **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Backend pool | Select the backend pool of the load balancer. </br> In this example, it's **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest at the defaults or tailor to your requirements. </br> Select **OK**. |
+ | Session persistence | Select **None** or your required persistence. </br> For more information about distribution modes, see [Azure Load Balancer distribution modes](load-balancer-distribution-mode.md). |
+ | Idle timeout (minutes) | Leave the default of **4** or move the slider to your required idle timeout. |
+ | TCP reset | Select **Enabled**. </br> For more information on TCP reset, see [Load Balancer TCP Reset and Idle Timeout](load-balancer-tcp-reset.md). |
+ | Floating IP | Leave the default of **Disabled** or enable if your deployment requires floating IP. </br> For information on floating IP, see [Azure Load Balancer Floating IP configuration](load-balancer-floating-ip.md). |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** </br> For more information on outbound rules and (SNAT), see [Outbound rules Azure Load Balancer](outbound-rules.md) and [Using Source Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md).|
+
+7. Select **Add**.
+
+ :::image type="content" source="./media/manage-rules-how-to/add-load-balancing-rule.png" alt-text="Screenshot of the add load balancer rule page." border="true":::
+
+### Remove a load-balancing rule
+
+In this example, you'll remove a load-balancing rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Load balancing rules** in **Settings**.
+
+5. Select the three dots next to the rule you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-rules-how-to/remove-load-balancing-rule.png" alt-text="Screenshot of removing a load-balancing rule." border="true":::
+
+## High availability ports
+
+In this section, you'll learn how to add and remove a high availability ports rule. You'll use an internal load balancer in this example.
+
+HA ports rules are supported on a standard internal load balancer.
+
+### Add high availability ports rule
+
+In this example, you'll create a high availability ports rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Load balancing rules** in **Settings**.
+
+5. Select **+ Add** in **Load balancing rules** to add a rule.
+
+ :::image type="content" source="./media/manage-rules-how-to/load-balancing-rules.png" alt-text="Screenshot of the load-balancing rules page in a standard load balancer." border="true":::
+
+6. Enter or select the following information in **Add load balancing rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHARule**. |
+ | IP Version | Select **IPv4** or **IPv6**. |
+ | Frontend IP address | Select the frontend IP address of the load balancer. <br> In this example, it's **myFrontendIP**. </br> Select the box next to **HA Ports**. |
+ | Backend pool | Select the backend pool of the load balancer. </br> In this example, it's **myBackendPool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Enter a TCP port in **Port**. In this example, it's port **80**. Enter a port that meets your requirements. </br> Leave the rest at the defaults or tailor to your requirements. </br> Select **OK**. |
+ | Session persistence | Select **None** or your required persistence. </br> For more information about distribution modes, see [Azure Load Balancer distribution modes](load-balancer-distribution-mode.md). |
+ | Idle timeout (minutes) | Leave the default of **4** or move the slider to your required idle timeout. |
+ | TCP reset | Select **Enabled**. </br> For more information on TCP reset, see [Load Balancer TCP Reset and Idle Timeout](load-balancer-tcp-reset.md). |
+ | Floating IP | Leave the default of **Disabled** or enable if your deployment requires floating IP. </br> For information on floating IP, see [Azure Load Balancer Floating IP configuration](load-balancer-floating-ip.md). |
+
+ For more information on HA ports rule configuration, see **[High availability ports overview](load-balancer-ha-ports-overview.md)**.
+
+7. Select **Add**.
+
+ :::image type="content" source="./media/manage-rules-how-to/add-ha-ports-load-balancing-rule.png" alt-text="Screenshot of the add load balancer HA ports rule page." border="true":::
+
+### Remove a high availability ports rule
+
+In this example, you'll remove a high availability ports rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Load balancing rules** in **Settings**.
+
+5. Select the three dots next to the rule you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-rules-how-to/remove-ha-ports-load-balancing-rule.png" alt-text="Screenshot of removing a HA ports load-balancing rule." border="true":::
+
+## Inbound NAT rule
+
+Inbound NAT rules are used to route connections to a specific VM in the backend pool. For more information and a detailed tutorial on configuring and testing inbound NAT rules, see [Tutorial: Configure port forwarding in Azure Load Balancer using the portal](tutorial-load-balancer-port-forwarding-portal.md).
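For reference, a single inbound NAT rule can also be sketched in Azure PowerShell. The port numbers below are illustrative, and the rule still has to be associated with a VM NIC's IP configuration afterward:

```azurepowershell-interactive
# Forward frontend port 4422 to port 3389 on one backend instance (placeholders)
$lb = Get-AzLoadBalancer -Name 'myLoadBalancer' -ResourceGroupName 'myResourceGroup'

$lb | Add-AzLoadBalancerInboundNatRuleConfig `
    -Name 'myNATRule' `
    -Protocol Tcp `
    -FrontendPort 4422 `
    -BackendPort 3389 `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0]

$lb | Set-AzLoadBalancer
```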
+
+## Outbound rule
+
+You'll learn how to add and remove an outbound rule in this section. You'll use a public load balancer in this example.
+
+Outbound rules are supported on standard public load balancers.
+
+### Add outbound rule
+
+In this example, you'll create an outbound rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Outbound rules** in **Settings**.
+
+5. Select **+ Add** in **Outbound rules** to add a rule.
+
+ :::image type="content" source="./media/manage-rules-how-to/outbound-rules.png" alt-text="Screenshot of the outbound rules page in a standard load balancer." border="true":::
+
+6. Enter or select the following information in **Add outbound rule**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myOutboundRule**. |
+ | Frontend IP address | Select the frontend IP address of the load balancer. <br> In this example, it's **myFrontendIP**. |
+ | Protocol | Leave the default of **All**. |
+ | Idle timeout (minutes) | Leave the default of **4** or move the slider to meet your requirements. |
+ | TCP Reset | Leave the default of **Enabled**. |
+ | Backend pool | Select the backend pool of the load balancer. </br> In this example, it's **myBackendPool**. |
+ | **Port allocation** | |
+ | Port allocation | Select **Manually choose number of outbound ports**. |
+ | **Outbound ports** | |
+ | Choose by | Select **Ports per instance**. |
+ | Ports per instance | Enter **10000**. |
+
+7. Select **Add**.
+
+ :::image type="content" source="./media/manage-rules-how-to/add-outbound-rule.png" alt-text="Screenshot of the add outbound rule page." border="true":::
+
+### Remove an outbound rule
+
+In this example, you'll remove an outbound rule.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Outbound rules** in **Settings**.
+
+5. Select the three dots next to the rule you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-rules-how-to/remove-outbound-rule.png" alt-text="Screenshot of removing an outbound rule." border="true":::
+
+## Next steps
+
+In this article, you learned how to manage rules for an Azure Load Balancer.
+
+For more information about Azure Load Balancer, see:
+- [What is Azure Load Balancer?](load-balancer-overview.md)
+- [Frequently asked questions - Azure Load Balancer](load-balancer-faqs.yml)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 08/02/2021 Last updated : 08/25/2021 # Limits and configuration reference for Azure Logic Apps
-> For Power Automate, see [Limits and configuration in Power Automate](/flow/limits-and-config).
+> For Power Automate, see [Limits and configuration in Power Automate](/power-automate/limits-and-config).
This article describes the limits and configuration information for Azure Logic Apps and related resources. To create logic app workflows, you choose the **Logic App** resource type based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows.
If your environment has strict network requirements or firewalls that limit traf
> [!NOTE] > If you're using [Power Automate](/power-automate/getting-started), some actions, such as **HTTP** and **HTTP + OpenAPI**, > go directly through the Azure Logic Apps service and come from the IP addresses that are listed here. For more information
-> about the IP addresses used by Power Automate, see [Limits and configuration for Power Automate](/flow/limits-and-config#ip-address-configuration).
+> about the IP addresses used by Power Automate, see [Limits and configuration for Power Automate](/power-automate/limits-and-config#ip-address-configuration).
For example, suppose your logic apps are deployed in the West US region. To support calls that your logic apps send or receive through built-in triggers and actions, such as the [HTTP trigger or action](../connectors/connectors-native-http.md), your firewall needs to allow access for *all* the Azure Logic Apps service inbound IP addresses *and* outbound IP addresses that exist in the West US region.
This section lists the outbound IP addresses for the Azure Logic Apps service. I
| UAE North | 40.123.230.45, 40.123.231.179, 40.123.231.186, 40.119.166.152, 40.123.228.182, 40.123.217.165, 40.123.216.73, 40.123.212.104 | | UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24 | | UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63 |
-| West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75 |
+| West Central US | 52.161.27.190, 52.161.18.218, 52.161.9.108, 13.78.151.161, 13.78.137.179, 13.78.148.140, 13.78.129.20, 13.78.141.75, 13.71.199.128 - 13.71.199.159 |
| West Europe | 40.68.222.65, 40.68.209.23, 13.95.147.65, 23.97.218.130, 51.144.182.201, 23.97.211.179, 104.45.9.52, 23.97.210.126, 13.69.71.160, 13.69.71.161, 13.69.71.162, 13.69.71.163, 13.69.71.164, 13.69.71.165, 13.69.71.166, 13.69.71.167 | | West India | 104.211.164.80, 104.211.162.205, 104.211.164.136, 104.211.158.127, 104.211.156.153, 104.211.158.123, 104.211.154.59, 104.211.154.7 | | West US | 52.160.92.112, 40.118.244.241, 40.118.241.243, 157.56.162.53, 157.56.167.147, 104.42.49.145, 40.83.164.80, 104.42.38.32, 13.86.223.0, 13.86.223.1, 13.86.223.2, 13.86.223.3, 13.86.223.4, 13.86.223.5 |
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
Title: Quickstart - Create integration workflows with Azure Logic Apps in the Azure portal
+ Title: Quickstart - Create automated workflows with Azure Logic Apps in the Azure portal
description: Create your first automated integration workflow with multi-tenant Azure Logic Apps in the Azure portal. ms.suite: integration Previously updated : 05/25/2021 Last updated : 08/24/2021 # Customer intent: As a developer, I want to create my first automated integration workflow by using Azure Logic Apps in the Azure portal # Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal
-This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account, when you use *multi-tenant* [Azure Logic Apps](logic-apps-overview.md). While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments. For more information about multi-tenant versus single-tenant model, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account, when you use [*multi-tenant* Azure Logic Apps](logic-apps-overview.md). While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments. For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
-In this example, you create a workflow that uses the RSS connector and the Office 365 Outlook connector. The RSS connector has a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector has an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow.
+In this example, you create a logic app resource and workflow that uses the RSS connector and the Office 365 Outlook connector. The resource runs in multi-tenant Azure Logic Apps and is based on the [Consumption pricing model](logic-apps-pricing.md#consumption-pricing). The RSS connector has a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector has an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow.
The following screenshot shows the high-level example workflow:
The following screenshot shows the high-level example workflow:
As you progress through this quickstart, you'll learn these basic steps:
-* Create a logic app resource that runs in the multi-tenant Logic Apps service environment.
+* Create a logic app resource that runs in the multi-tenant Azure Logic Apps environment.
* Select the blank logic app template. * Add a trigger that specifies when to run the workflow. * Add an action that performs a task after the trigger fires. * Run your workflow.
-To create and manage a logic app using other tools, review these other Logic Apps quickstarts:
+To create and manage a logic app resource using other tools, review these other Azure Logic Apps quickstarts:
* [Create and manage logic apps in Visual Studio Code](quickstart-create-logic-apps-visual-studio-code.md) * [Create and manage logic apps in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md)
To create and manage a logic app using other tools, review these other Logic App
## Prerequisites
-* If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An email account from a service that works with Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, review [Connectors for Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+* An email account from a service that works with Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, review [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
> [!NOTE] > If you want to use the [Gmail connector](/connectors/gmail/), only G Suite accounts can use this connector without restriction in Azure
To create and manage a logic app using other tools, review these other Logic App
> [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application). > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-* If you have a firewall that limits traffic to specific IP addresses, set up your firewall to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service in the Azure region where your logic app exists.
+* If you have a firewall that limits traffic to specific IP addresses, set up your firewall to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by the Logic Apps service in the Azure region where you create your logic app workflow.
- This example also uses the RSS and Office 365 Outlook connectors, which are [managed by Microsoft](../connectors/managed.md). These connectors require that you set up your firewall to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in the logic app's Azure region.
+ This example uses the RSS and Office 365 Outlook connectors, which are [managed by Microsoft](../connectors/managed.md). These connectors require that you set up your firewall to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in the Azure region for your logic app resource.
<a name="create-logic-app-resource"></a>
To create and manage a logic app using other tools, review these other Logic App
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
-1. In the Azure search box, enter `logic apps`, and select **Logic Apps**.
+1. In the Azure search box, enter `logic apps`, and select **Logic apps**.
![Screenshot that shows Azure portal search box with "logic apps" as the search term and "Logic Apps" as the selected result.](./media/quickstart-create-first-logic-app-workflow/find-select-logic-apps.png)
-1. On the **Logic Apps** page, select **Add** > **Consumption**.
+1. On the **Logic apps** page, select **Add**.
- This step creates a logic app resource that runs in the multi-tenant Logic Apps service environment and uses a [consumption pricing model](logic-apps-pricing.md).
+ ![Screenshot showing the Azure portal and Logic Apps service page and "Add" option selected.](./media/quickstart-create-first-logic-app-workflow/add-new-logic-app.png)
- ![Screenshot showing the Azure portal and Logic Apps service page with logic apps list, "Add" menu opened, and "Consumption" selected.](./media/quickstart-create-first-logic-app-workflow/add-new-logic-app.png)
-
-1. On the **Logic App** pane, provide basic details and settings for your logic app. Create a new [resource group](../azure-resource-manager/management/overview.md#terminology) for this example logic app.
+1. On the **Create Logic App** pane, select the Azure subscription to use, create a new [resource group](../azure-resource-manager/management/overview.md#terminology) for your logic app resource, and provide basic details about your logic app resource.
| Property | Value | Description | |-|-|-| | **Subscription** | <*Azure-subscription-name*> | The name of your Azure subscription. |
- | **Resource group** | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) name, which must be unique across regions. This example uses "My-First-LA-RG". |
- | **Logic app name** | <*logic-app-name*> | Your logic app's name, which must be unique across regions. This example uses "My-First-Logic-App". <p><p>**Important**: This name can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
- | **Region** | <*Azure-region*> | The Azure datacenter region where to store your app's information. This example uses "West US". |
- | **Associate with integration service environment** | Off | Select this option only when you want to deploy this logic app to an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md). For this example, leave this option unselected. |
- | **Enable log analytics** | Off | Select this option only when you want to enable diagnostic logging. For this example, leave this option unselected. |
+ | **Resource Group** | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) name, which must be unique across regions. This example uses "My-First-LA-RG". |
+ | **Type** | **Consumption** | The logic app resource type and billing model to use for your resource: <p><p>- **Consumption**: This logic app resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). This example uses this **Consumption** model. <p>- **Standard**: This logic app resource type runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
+ | **Logic App name** | <*logic-app-name*> | Your logic app resource name, which must be unique across regions. This example uses "My-First-Logic-App". <p><p>**Important**: This name can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+ | **Region** | <*Azure-region*> | The Azure datacenter region to use for storing your app's information. This example uses "West US". <p>**Note**: If your subscription is associated with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md), this list includes those environments. |
+ | **Enable log analytics** | **No** | Change this option only when you want to enable diagnostic logging. For this example, leave this option unselected. |
||||
- ![Screenshot showing the Azure portal and logic app creation page with details for new logic app.](./media/quickstart-create-first-logic-app-workflow/create-logic-app-settings.png)
+ ![Screenshot showing the Azure portal and logic app resource creation page with details for new logic app.](./media/quickstart-create-first-logic-app-workflow/create-logic-app-settings.png)
1. When you're ready, select **Review + Create**. On the validation page, confirm the details that you provided, and select **Create**. ## Select the blank template
-1. After Azure successfully deploys your app, select **Go to resource**. Or, find and select your logic app by typing the name in the Azure search box.
+1. After Azure successfully deploys your app, select **Go to resource**. Or, find and select your logic app resource by typing the name in the Azure search box.
![Screenshot showing the resource deployment page and selected button, "Go to resource".](./media/quickstart-create-first-logic-app-workflow/go-to-new-logic-app-resource.png)
- The Logic Apps Designer opens and shows a page with an introduction video and commonly used triggers.
+ The workflow designer opens and shows a page with an introduction video and commonly used triggers.
1. Under **Templates**, select **Blank Logic App**.
- ![Screenshot showing the Logic Apps Designer template gallery and selected template, "Blank Logic App".](./media/quickstart-create-first-logic-app-workflow/choose-logic-app-template.png)
+ ![Screenshot showing the workflow designer, template gallery, and selected template, "Blank Logic App".](./media/quickstart-create-first-logic-app-workflow/choose-logic-app-template.png)
After you select the template, the designer now shows an empty workflow surface.
To create and manage a logic app using other tools, review these other Logic App
## Add the trigger
-A workflow always starts with a single [trigger](../logic-apps/logic-apps-overview.md#how-do-logic-apps-work), which specifies the condition to meet before running any actions in the workflow. Each time the trigger fires, the Logic Apps service creates and runs a workflow instance. If the trigger doesn't fire, no instance is created nor run. You can start a workflow by choosing from many different triggers.
+A workflow always starts with a single [trigger](../logic-apps/logic-apps-overview.md#how-do-logic-apps-work), which specifies the condition to meet before running any actions in the workflow. Each time the trigger fires, Azure Logic Apps creates and runs a workflow instance. If the trigger doesn't fire, no instance is created or run. You can start a workflow by choosing from many different triggers.
This example uses an RSS trigger that checks an RSS feed, based on a schedule. If a new item exists in the feed, the trigger fires, and a new workflow instance starts to run. If multiple new items exist between checks, the trigger fires for each item, and a separate new workflow instance runs for each item.
-1. In the **Logic Apps Designer**, under the search box, select **All**.
+1. In the workflow designer, under the search box, select **All**.
1. To find the RSS trigger, in the search box, enter `rss`. From the **Triggers** list, select the RSS trigger, **When a feed item is published**.
- ![Screenshot showing the Logic Apps Designer with "rss" in the search box and the selected RSS trigger, "When a feed item is published".](./media/quickstart-create-first-logic-app-workflow/add-rss-trigger-new-feed-item.png)
+ ![Screenshot showing the workflow designer with "rss" in the search box and the selected RSS trigger, "When a feed item is published".](./media/quickstart-create-first-logic-app-workflow/add-rss-trigger-new-feed-item.png)
1. In the trigger details, provide the following information:
This example uses an Office 365 Outlook action that sends an email each time tha
| `Link:` | **Primary feed link** | The URL for the item | ||||
- ![Screenshot showing the Logic Apps Designer, the "Send an email" action, and selected properties inside the "Body" box.](./media/quickstart-create-first-logic-app-workflow/send-email-body.png)
+ ![Screenshot showing the workflow designer, the "Send an email" action, and selected properties inside the "Body" box.](./media/quickstart-create-first-logic-app-workflow/send-email-body.png)
1. Save your logic app. On the designer toolbar, select **Save**.
This example uses an Office 365 Outlook action that sends an email each time tha
## Run your workflow
-To check that the workflow runs correctly, you can wait for the trigger to check the RSS feed based on the set schedule. Or, you can manually run the workflow by selecting **Run** on the Logic Apps Designer toolbar, as shown in the following screenshot.
+To check that the workflow runs correctly, you can wait for the trigger to check the RSS feed based on the set schedule. Or, you can manually run the workflow by selecting **Run** on the workflow designer toolbar, as shown in the following screenshot.
-![Screenshot showing the Logic Apps Designer and the "Run" button selected on the designer toolbar.](./media/quickstart-create-first-logic-app-workflow/run-logic-app-test.png)
+![Screenshot showing the workflow designer and the "Run" button selected on the designer toolbar.](./media/quickstart-create-first-logic-app-workflow/run-logic-app-test.png)
-If the RSS feed has new items, your workflow sends an email for each new item. Otherwise, your workflow waits until the next interval to check the RSS feed again.
+If the RSS feed has new items, your workflow sends an email for each new item. Otherwise, your workflow waits until the next interval to check the RSS feed again.
The following screenshot shows a sample email that's sent by the example workflow. The email includes the details from each trigger output that you selected plus the descriptive text that you included for each item.
The following screenshot shows a sample email that's sent by the example workflo
If you don't receive emails from the workflow as expected: * Check your email account's junk or spam folder, in case the message was incorrectly filtered.+ * Make sure the RSS feed you're using has published items since the last scheduled or manual check. ## Clean up resources
-When you're done with this quickstart, clean up the sample logic app and any related resources by deleting the resource group that you created for this example.
+When you're done with this quickstart, delete the sample logic app resource and any related resources by deleting the resource group that you created for this example.
1. In the Azure search box, enter `resource groups`, and then select **Resource groups**.
logic-apps Workflow Definition Language Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/workflow-definition-language-functions-reference.md
Last updated 08/16/2021
# Reference guide to using functions in expressions for Azure Logic Apps and Power Automate
-For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/flow/getting-started), some [expressions](../logic-apps/logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference these values or process the values in these expressions, you can use *functions* provided by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
+For workflow definitions in [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and [Power Automate](/power-automate/getting-started), some [expressions](../logic-apps/logic-apps-workflow-definition-language.md#expressions) get their values from runtime actions that might not yet exist when your workflow starts running. To reference these values or process the values in these expressions, you can use *functions* provided by the [Workflow Definition Language](../logic-apps/logic-apps-workflow-definition-language.md).
> [!NOTE] > This reference page applies to both Azure Logic Apps and Power Automate, but appears in the > Azure Logic Apps documentation. Although this page refers specifically to logic app workflows, > these functions work for both flows and logic app workflows. For more information about functions
-> and expressions in Power Automate, see [Use expressions in conditions](/flow/use-expressions-in-conditions).
+> and expressions in Power Automate, see [Use expressions in conditions](/power-automate/use-expressions-in-conditions).
For example, you can calculate values by using math functions, such as the [add()](../logic-apps/workflow-definition-language-functions-reference.md#add) function, when you want the sum of integers or floats (for example, `add(2, 3.5)` returns `5.5`). Here are other example tasks that you can perform with functions:
machine-learning Apply Sql Transformation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/apply-sql-transformation.md
Previously updated : 11/12/2020 Last updated : 08/11/2021 # Apply SQL Transformation
This section contains implementation details, tips, and answers to frequently as
- An input is always required on port 1. - For column identifiers that contain a space or other special characters, always enclose the column identifier in square brackets or double quotation marks when referring to the column in the `SELECT` or `WHERE` clauses (for example, `SELECT [Patient ID] FROM t1;`). +
+- If you have used **Edit Metadata** to specify column metadata (for example, marking columns as categorical or as label fields) before **Apply SQL Transformation**, the outputs of **Apply SQL Transformation** will not contain these attributes. You need to use **Edit Metadata** again to edit the columns after **Apply SQL Transformation**.
### Unsupported statements
machine-learning Create Python Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/create-python-model.md
Previously updated : 06/18/2020 Last updated : 08/18/2021 # Create Python Model module
This article shows how to use **Create Python Model** with a simple pipeline. He
> [!NOTE] > Pay extra attention to the comments in the sample code and make sure your script strictly follows the requirements, including the class name, methods, and method signatures. Violations lead to exceptions.
+> **Create Python Model** supports only sklearn-based models, which are then trained by using **Train Model**.
The following sample code of the two-class Naive Bayes classifier uses the popular *sklearn* package:
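A minimal sketch of such a script follows, assuming the module's required `AzureMLModel` class with `__init__`, `train`, and `predict` methods; the estimator choice and output column names are illustrative, not prescriptive:

```Python
import pandas as pd
from sklearn.naive_bayes import GaussianNB

# The script must define a class named AzureMLModel that exposes
# __init__, train, and predict with exactly these signatures.
class AzureMLModel:
    def __init__(self):
        # self.model holds the sklearn estimator that Train Model will fit
        self.model = GaussianNB()
        self.feature_column_names = list()

    def train(self, df_train, df_label):
        # df_train: pandas DataFrame of features; df_label: pandas DataFrame of labels
        self.feature_column_names = df_train.columns.tolist()
        self.model.fit(df_train, df_label.values.ravel())

    def predict(self, df):
        # Accepts and returns a pandas DataFrame, as the designer expects
        return pd.DataFrame({
            'Scored Labels': self.model.predict(df[self.feature_column_names]),
            'probabilities': self.model.predict_proba(df[self.feature_column_names])[:, 1]
        })
```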
machine-learning Execute Python Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/algorithm-module-reference/execute-python-script.md
def azureml_main(dataframe1 = None, dataframe2 = None):
## Upload files The Execute Python Script module supports uploading files by using the [Azure Machine Learning Python SDK](/python/api/azureml-core/azureml.core.run%28class%29#upload-file-name--path-or-stream-).
-The following example shows how to upload an image file in the Execute Python Script module:
+The following example shows how to upload an image file to the run record in the Execute Python Script module:
```Python
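# A minimal sketch (assumption): save a matplotlib figure locally, then
# attach it to the run record with Run.upload_file. The file and folder
# names below are illustrative.
def azureml_main(dataframe1 = None, dataframe2 = None):

    from matplotlib import pyplot as plt
    from azureml.core import Run

    # Plot some data and save the figure locally
    plt.plot([1, 2, 3, 4])
    plt.ylabel('some numbers')
    img_file = "line.png"
    plt.savefig(img_file)

    # Attach the image to the run record so it can be previewed after the run
    run = Run.get_context(allow_offline=True)
    run.upload_file(f"graphics/{img_file}", img_file)

    return dataframe1,
```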
After the pipeline run is finished, you can preview the image in the right panel
> [!div class="mx-imgBorder"] > ![Preview of uploaded image](media/module/upload-image-in-python-script.png)
+You can also upload files to any datastore by using the following code. In this case, you can preview the file only in your storage account.
+```Python
+import pandas as pd
+
+# The entry point function MUST have two input arguments.
+# If the input port is not connected, the corresponding
+# dataframe argument will be None.
+# Param<dataframe1>: a pandas.DataFrame
+# Param<dataframe2>: a pandas.DataFrame
+def azureml_main(dataframe1 = None, dataframe2 = None):
+
+ # Execution logic goes here
+ print(f'Input pandas.DataFrame #1: {dataframe1}')
+
+ from matplotlib import pyplot as plt
+ import os
+
+ plt.plot([1, 2, 3, 4])
+ plt.ylabel('some numbers')
+ img_file = "line.png"
+
+ # Set path
+ path = "./img_folder"
+ os.mkdir(path)
+ plt.savefig(os.path.join(path,img_file))
+
+ # Get current workspace
+ from azureml.core import Run
+ run = Run.get_context(allow_offline=True)
+ ws = run.experiment.workspace
+
+ # Get a named datastore from the current workspace and upload to specified path
+ from azureml.core import Datastore
+ datastore = Datastore.get(ws, datastore_name='workspacefilestore')
+ datastore.upload(path)
+
+ return dataframe1,
+```
++ ## How to configure Execute Python Script The Execute Python Script module contains sample Python code that you can use as a starting point. To configure the Execute Python Script module, provide a set of inputs and Python code to run in the **Python script** text box.
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Prebuilt Docker container images for inference [(preview)](https://azure.microso
## List of prebuilt Docker images for inference * All the docker images run as non-root user.
+* We recommend that you use the `latest` tag for Docker images. Prebuilt Docker images for inference are published to the Microsoft Container Registry (MCR). To query the list of available tags, follow the [instructions on their GitHub repository](https://github.com/microsoft/ContainerRegistry#browsing-mcr-content); a usage sketch follows this list.
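For example, one of these prebuilt images can be referenced as the base image of an Azure Machine Learning environment. This is a minimal sketch; the image name and tag are illustrative, so substitute a current repository and tag from MCR:

```Python
from azureml.core import Environment

# Reference a prebuilt inference image as the environment's base image.
# The image path below is an assumption; query MCR for current names and tags.
env = Environment(name="prebuilt-inference-env")
env.docker.base_image = "mcr.microsoft.com/azureml/minimal-ubuntu18.04-py37-cpu-inference:latest"
# The prebuilt image already contains the Python dependencies it needs
env.python.user_managed_dependencies = True
```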
### TensorFlow
machine-learning How To Create Attach Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-attach-kubernetes.md
These methods of creating an AKS cluster use the __default__ version of the clus
When **attaching** an existing AKS cluster, we support all currently supported AKS versions.
+> [!IMPORTANT]
+> Currently, Azure Machine Learning does not support deploying models to AKS version **1.21.x**.
+ > [!IMPORTANT] > Azure Kubernetes Service uses [Blobfuse FlexVolume driver](https://github.com/Azure/kubernetes-volume-drivers/blob/master/flexvolume/blobfuse/README.md) for the versions <=1.16 and [Blob CSI driver](https://github.com/kubernetes-sigs/blob-csi-driver/blob/master/README.md) for the versions >=1.17. > Therefore, it is important to re-deploy or [update the web service](how-to-deploy-update-web-service.md) after cluster upgrade in order to deploy to correct blobfuse method for the cluster version.
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-machine-learning-pipelines.md
else:
# Add some packages relied on by data prep step aml_run_config.environment.python.conda_dependencies = CondaDependencies.create( conda_packages=['pandas','scikit-learn'],
- pip_packages=['azureml-sdk', 'azureml-dataprep[fuse,pandas]'],
+ pip_packages=['azureml-sdk', 'azureml-dataset-runtime[fuse,pandas]'],
pin_sdk_version=False) ```
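A run configuration like this one is typically attached to an individual pipeline step. The following is a minimal sketch; the step name, script file, and compute target name are placeholders, not values from the article:

```Python
from azureml.pipeline.steps import PythonScriptStep

# Attach the run configuration (including the dependencies above) to a step.
# "prep.py" and "cpu-cluster" are hypothetical names for this sketch.
dataprep_step = PythonScriptStep(
    name="data prep",
    script_name="prep.py",
    source_directory=".",
    compute_target="cpu-cluster",
    runconfig=aml_run_config,
    allow_reuse=True
)
```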
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-manage-compute-instance.md
You can also use the following environment variables in your script:
3. CI_NAME 4. CI_LOCAL_UBUNTU_USER. This points to azureuser
-You can use setup script in conjunction with Azure Policy to either enforce or default a setup script for every compute instance creation.
+You can use a setup script in combination with **Azure Policy** to either enforce or default a setup script for every compute instance creation.
+The default setup script timeout is 15 minutes. You can change the timeout in the studio UI or through ARM templates by using the DURATION parameter.
+DURATION is a floating-point number with an optional suffix: 's' for seconds (the default), 'm' for minutes, 'h' for hours, or 'd' for days.
### Use the script in the studio
machine-learning How To Deploy Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-azure-kubernetes-service.md
When deploying to Azure Kubernetes Service, you deploy to an AKS cluster that is
- If you want to deploy models to GPU nodes or FPGA nodes (or any specific SKU), then you must create a cluster with the specific SKU. There is no support for creating a secondary node pool in an existing cluster and deploying models in the secondary node pool.
+> [!IMPORTANT]
+> Currently, Azure Machine Learning does not support deploying models to AKS version **1.21.x**.
+ ## Understand the deployment processes The word "deployment" is used in both Kubernetes and Azure Machine Learning. "Deployment" has different meanings in these two contexts. In Kubernetes, a `Deployment` is a concrete entity, specified with a declarative YAML file. A Kubernetes `Deployment` has a defined lifecycle and concrete relationships to other Kubernetes entities such as `Pods` and `ReplicaSets`. You can learn about Kubernetes from docs and videos at [What is Kubernetes?](https://aka.ms/k8slearning).
marketplace Business Applications Isv Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/business-applications-isv-program.md
Last updated 11/19/2020
# Microsoft Business Applications Independent Software Vendor (ISV) Connect Program onboarding guide
-The [Business Applications ISV Connect Program](https://partner.microsoft.com/solutions/business-applications/isv-overview) aims to accelerate the growth and overall success of Independent Software Vendors (ISVs) building modern, cloud-based, line-of-business (LOB) solutions with Dynamics 365 Customer Engagement and PowerApps (Dynamics CE applications) or Dynamics 365 Finance and Operations (Dynamics Ops applications).
+The [Business Applications ISV Connect Program](https://partner.microsoft.com/solutions/business-applications/isv-overview) aims to accelerate the growth and overall success of Independent Software Vendors (ISVs) building modern, cloud-based, line-of-business (LOB) solutions with Dynamics 365 Customer Engagement and Power Apps (Dynamics CE applications) or Dynamics 365 Finance and Operations (Dynamics Ops applications).
To enroll and take advantage of all the technical, marketing, and sales enablement benefits of the Business Applications ISV Connect Program, complete the following sections in this article.
marketplace Co Sell Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-configure.md
The Co-sell option is available for the following offer types.
- Azure Container - Azure Virtual Machine - Consulting service-- Dynamics 365 for Customer Engagement & PowerApps
+- Dynamics 365 for Customer Engagement & Power Apps
- Dynamics 365 for operations - Dynamics 365 business central - IoT Edge Module
marketplace Co Sell Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-overview.md
Azure IP Co-sell incentive status can be applied to these offer types:
Business Applications Co-sell incentive (Standard and Premium) status can be applied to these offer types: -- Dynamics 365 for Customer Engagement & PowerApps
+- Dynamics 365 for Customer Engagement & Power Apps
- Dynamics 365 for operations Offers that achieve _Azure IP Co-sell incentivized_ status gain these commercial marketplace benefits:
marketplace Co Sell Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/co-sell-requirements.md
For an offer to achieve co-sell ready status, you must meet the following requir
**Business Applications ISVs**: -- Dynamics 365 & PowerApps (except Dynamics 365 Business Central) solutions require ISV Connect enrollment.
+- Dynamics 365 & Power Apps (except Dynamics 365 Business Central) solutions require ISV Connect enrollment.
### Complete the Co-sell with Microsoft tab
marketplace Determine Your Listing Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/determine-your-listing-type.md
This table shows which listing options are available for each offer type:
| Consulting service | | | &#10004; | | | Azure Container | | | | &#10004; | | Dynamics 365 business central | &#10004; | &#10004; | &#10004; | &#10004; |
-| Dynamics 365 Customer Engagement & PowerApps | &#10004; | &#10004; | &#10004; | &#10004; |
+| Dynamics 365 Customer Engagement & Power Apps | &#10004; | &#10004; | &#10004; | &#10004; |
| Dynamics 365 for operations | &#10004; | &#10004; | &#10004; | &#10004; | | IoT Edge module | | | | &#10004; | | Managed Service | | | | &#10004; |
This table shows which offer types support the pricing options that are included
| Consulting service | | | | | | Azure Container | &#10004;<sup>1</sup> | &#10004;<sup>1</sup> | | | | Dynamics 365 business central | &#10004; | | | |
-| Dynamics 365 Customer Engagement & PowerApps | &#10004; | | | |
+| Dynamics 365 Customer Engagement & Power Apps | &#10004; | | | |
| Dynamics 365 for operations | &#10004; | | | | | IoT Edge module | &#10004;<sup>1</sup> | &#10004;<sup>1</sup> | | | | Managed Service | | &#10004; | | |
The following table shows the options that are available for different offer typ
| SaaS | Both online stores | Both online stores | Both online stores | | Both online stores &#42; | | Microsoft 365 App | AppSource | AppSource | | | AppSource &#42;&#42; | | Dynamics 365 Business Central | AppSource | AppSource | | | |
-| Dynamics 365 for Customer Engagements & PowerApps | AppSource | AppSource | | | |
+| Dynamics 365 for Customer Engagements & Power Apps | AppSource | AppSource | | | |
| Dynamics 365 Operations | AppSource | AppSource | | | | | Power BI App | | | AppSource | | | |||||||
marketplace Dynamics 365 Customer Engage Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-availability.md
Title: Configure Dynamics 365 for Customer Engagement & PowerApps offer availability on Microsoft AppSource (Azure Marketplace).
-description: Configure Dynamics 365 for Customer Engagement & PowerApps offer availability on Microsoft AppSource (Azure Marketplace).
+ Title: Configure Dynamics 365 for Customer Engagement & Power Apps offer availability on Microsoft AppSource (Azure Marketplace).
+description: Configure Dynamics 365 for Customer Engagement & Power Apps offer availability on Microsoft AppSource (Azure Marketplace).
marketplace Dynamics 365 Customer Engage Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-offer-listing.md
Title: Configure Dynamics 365 for Customer Engagement & PowerApps offer listing details on Microsoft AppSource (Azure Marketplace)
-description: Configure Dynamics 365 for Customer Engagement & PowerApps offer listing details on Microsoft AppSource (Azure Marketplace).
+ Title: Configure Dynamics 365 for Customer Engagement & Power Apps offer listing details on Microsoft AppSource (Azure Marketplace)
+description: Configure Dynamics 365 for Customer Engagement & Power Apps offer listing details on Microsoft AppSource (Azure Marketplace).
marketplace Dynamics 365 Customer Engage Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-offer-setup.md
Title: Create a Dynamics 365 for Customer Engagement & PowerApps offer on Microsoft AppSource (Azure Marketplace).
-description: Create a Dynamics 365 for Customer Engagement & PowerApps offer on Microsoft AppSource (Azure Marketplace).
+ Title: Create a Dynamics 365 for Customer Engagement & Power Apps offer on Microsoft AppSource (Azure Marketplace).
+description: Create a Dynamics 365 for Customer Engagement & Power Apps offer on Microsoft AppSource (Azure Marketplace).
Last updated 04/30/2021
-# How to create a Dynamics 365 for Customer Engagement & PowerApps offer
+# How to create a Dynamics 365 for Customer Engagement & Power Apps offer
-This article describes how to create a Dynamics 365 for Customer Engagement & PowerApps offer. All offers for Dynamics 365 go through our certification process. The trial experience allows users to deploy your solution to a live Dynamics 365 environment.
+This article describes how to create a Dynamics 365 for Customer Engagement & Power Apps offer. All offers for Dynamics 365 go through our certification process. The trial experience allows users to deploy your solution to a live Dynamics 365 environment.
Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program.
marketplace Dynamics 365 Customer Engage Plans https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-plans.md
Title: Create Dynamics 365 for Customer Engagement & Power Apps plans on Microsoft AppSource (Azure Marketplace).
-description: Configure Dynamics 365 for Customer Engagement & PowerApps offer plans if you chose to enable your offer for third-party app management.
+description: Configure Dynamics 365 for Customer Engagement & Power Apps offer plans if you chose to enable your offer for third-party app management.
marketplace Dynamics 365 Customer Engage Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-properties.md
Title: Configure Dynamics 365 for Customer Engagement & PowerApps offer properties on Microsoft AppSource (Azure Marketplace).
-description: Configure Dynamics 365 for Customer Engagement & PowerApps offer properties on Microsoft AppSource (Azure Marketplace).
+ Title: Configure Dynamics 365 for Customer Engagement & Power Apps offer properties on Microsoft AppSource (Azure Marketplace).
+description: Configure Dynamics 365 for Customer Engagement & Power Apps offer properties on Microsoft AppSource (Azure Marketplace).
marketplace Dynamics 365 Customer Engage Supplemental Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-supplemental-content.md
Title: Set up Dynamics 365 for Customer Engagement & PowerApps offer supplemental content on Microsoft AppSource (Azure Marketplace)
-description: Set up Dynamics 365 for Customer Engagement & PowerApps offer supplemental content on Microsoft AppSource (Azure Marketplace).
+ Title: Set up Dynamics 365 for Customer Engagement & Power Apps offer supplemental content on Microsoft AppSource (Azure Marketplace)
+description: Set up Dynamics 365 for Customer Engagement & Power Apps offer supplemental content on Microsoft AppSource (Azure Marketplace).
marketplace Dynamics 365 Customer Engage Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/dynamics-365-customer-engage-technical-configuration.md
Title: Set up Dynamics 365 for Customer Engagement & PowerApps offer technical configuration on Microsoft AppSource - Azure Marketplace
-description: Set up Dynamics 365 for Customer Engagement & PowerApps offer technical configuration on Microsoft AppSource (Azure Marketplace.
+ Title: Set up Dynamics 365 for Customer Engagement & Power Apps offer technical configuration on Microsoft AppSource - Azure Marketplace
+description: Set up Dynamics 365 for Customer Engagement & Power Apps offer technical configuration on Microsoft AppSource (Azure Marketplace).
marketplace Marketplace Dynamics 365 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/marketplace-dynamics-365.md
Last updated 04/30/2021
# Plan a Microsoft Dynamics 365 offer
-This article explains the different options and features of a Dynamics 365 offer in Microsoft AppSource in the commercial marketplace. AppSource includes offers that build on or extend Microsoft 365, Dynamics 365, PowerApps, and Power BI.
+This article explains the different options and features of a Dynamics 365 offer in Microsoft AppSource in the commercial marketplace. AppSource includes offers that build on or extend Microsoft 365, Dynamics 365, Power Apps, and Power BI.
Before you start, create a commercial marketplace account in [Partner Center](./create-account.md) and ensure it is enrolled in the commercial marketplace program. Also, review the [publishing process and guidelines](/office/dev/store/submit-to-appsource-via-partner-center).
marketplace Commercial Marketplace Lead Management Instructions Azure Table https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md
If your customer relationship management (CRM) system isn't explicitly supported
## (Optional) Use Power Automate to get lead notifications
-You can use [Power Automate](/flow/) to automate notifications every time a lead is added to your Azure Storage table. If you don't have an account, you can [sign up for a free account](https://flow.microsoft.com/).
+You can use [Power Automate](/power-automate/) to automate notifications every time a lead is added to your Azure Storage table. If you don't have an account, you can [sign up for a free account](https://flow.microsoft.com/).
### Lead notification example
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/plans-pricing.md
Plans are not supported for the following offer types:
- Consulting service - Dynamics 365 Business Central-- Dynamics 365 Customer Engagement & PowerApps
+- Dynamics 365 Customer Engagement & Power Apps
- Dynamics 365 for Operations - Power BI app
You can enable a free trial on plans for transactable Azure virtual machine and
> - Azure virtual machine > - SaaS > - Dynamics 365 Business Central
-> - Dynamics 365 for Customer Engagement & PowerApps
+> - Dynamics 365 for Customer Engagement & Power Apps
> - Dynamics 365 for Operations > > For more information about listing options, see [Determine your publishing option](determine-your-listing-type.md).
marketplace Test Drive Hosted Detailed Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/marketplace/test-drive-hosted-detailed-config.md
This article describes how to configure a hosted test drive for Dynamics 365 for
[![Selecting the 'Enable a test drive' check box.](media/test-drive/enable-test-drive-check-box.png)](media/test-drive/enable-test-drive-check-box.png#lightbox)
- - **Type of test drive** – Choose **Microsoft Hosted (Dynamics 365 for Customer Engagement & PowerApps)**. This indicates that Microsoft will host and maintain the service that performs the test drive user provisioning and deprovisioning.
+ - **Type of test drive** – Choose **Microsoft Hosted (Dynamics 365 for Customer Engagement & Power Apps)**. This indicates that Microsoft will host and maintain the service that performs the test drive user provisioning and deprovisioning.
5. Grant Microsoft AppSource permission to provision and deprovision test drive users in your tenant using [these instructions](./test-drive-azure-subscription-setup.md). In this step, you will generate the **Azure AD App ID** and **Azure AD App Key** values mentioned below. 6. Complete these fields on the **Test drive technical configuration** page.
media-services Concept Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/concept-media-reserved-units.md
Title: Media reserved units - Azure
description: Media reserved units allow you to scale media process and determine the speed of your media processing tasks. documentationcenter: ''-+ editor: ''
na ms.devlang: na Previously updated : 09/30/2020 Last updated : 08/25/2021
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-Azure Media Services enables you to scale media processing by managing Media Reserved Units (MRUs). An MRU provides additional computing capacity required for encoding media. The number of MRUs determine the speed with which your media tasks are processed and how many media tasks can be processed concurrently in an account. For example, if your account has five MRUs and there are tasks to be processed, then five media tasks can be running concurrently. Any remaining tasks will be queued up and can be picked up for processing sequentially when a running task finishes. Each MRU that you provision results in a capacity reservation but does not provide you with dedicated resource. During times of extremely high demand, all of your MRUs may not start processing immediately.
+Media Reserved Units (MRUs) were previously used in Azure Media Services v2 to control encoding concurrency and performance. You no longer need to manage MRUs or request quota increases for any Media Services account, because the system automatically scales up and down based on load. You will also see performance that is equal to or better than what MRUs previously provided.
-A task is an individual operation of work on an Asset e.g. adaptive streaming encoding. When you submit a job, Media Services will take care of breaking out the job into individual operations (i.e. tasks) that will then be associated with separate MRUs.
-
-## Choosing between different reserved unit types
-
-The following table helps you make a decision when choosing between different encoding speeds. It shows the duration of encoding for a 7 minute, 1080p video depending on the MRU used.
-
-|RU type|Scenario|Example results for the 7 min 1080p video |
-||||
-| **S1**|Single bitrate encoding. <br/>Files at SD or below resolutions, not time sensitive, low cost.|Encoding to single bitrate SD resolution MP4 file using "H264 Single Bitrate SD 16x9" takes around 7 minutes.|
-| **S2**|Single bitrate and multiple bitrate encoding.<br/>Normal usage for both SD and HD encoding.|Encoding with "H264 Single Bitrate 720p" preset takes around 6 minutes.<br/><br/>Encoding with "H264 Multiple Bitrate 720p" preset takes around 12 minutes.|
-| **S3**|Single bitrate and multiple bitrate encoding.<br/>Full HD and 4K resolution videos. Time sensitive, faster turnaround encoding.|Encoding with "H264 Single Bitrate 1080p" preset takes approximately 3 minutes.<br/><br/>Encoding with "H264 Multiple Bitrate 1080p" preset takes approximately 8 minutes.|
-
-> [!NOTE]
-> If you do not provision MRUs for your account, your media tasks will be processed with the performance of an S1 MRU and tasks will be picked up sequentially. No processing capacity is reserved so the wait time between one task finishing and the next one starting will depend on the availability of resources in the system.
-
-## Considerations
-
-* For Audio Analysis and Video Analysis jobs that are triggered by Media Services v3 or Azure Video Analyzer for Media, provisioning the account with ten S3 units is highly recommended. If you need more than 10 S3 MRUs, open a support ticket using the [Azure portal](https://portal.azure.com/).
-* For encoding tasks that don't have MRUs, there is no upper bound to the time your tasks can spend in queued state, and at most only one task will be running at a time.
+If you have an account that was created with a version of the API earlier than 2020-05-01, you will still have access to APIs for managing MRUs; however, none of the MRU configuration that you set will be used to control encoding concurrency or performance. If you don't see the option to manage MRUs in the Azure portal, you have an account that was created with the 2020-05-01 API or later.
## Billing
-You are charged based on number of minutes the Media Reserved Units are provisioned in your account, whether or not there are any jobs running. For a detailed explanation, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
+While there were previously charges for Media Reserved Units, as of April 17, 2021, there are no longer any charges for accounts that have Media Reserved Units configured. For more information on billing for encoding jobs, see [Encoding video and audio with Media Services](encoding-concept.md).
-## Next step
-[Scale Media Reserved Units with CLI](media-reserved-units-cli-how-to.md)
-[Analyze videos](analyze-videos-tutorial.md)
+For accounts created with the **2020-05-01** version of the API (that is, the v3 version) or through the Azure portal, scaling and media reserved units are no longer required. Scaling is now automatically handled by the service internally. Media reserved units are no longer needed or supported for any Azure Media Services account. See [Media reserved units (legacy)](concept-media-reserved-units.md) for additional information.
## See also
-* [Quotas and limits](limits-quotas-constraints-reference.md)
+* [Migrate from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
+* [Scale Media Reserved Units with CLI](media-reserved-units-cli-how-to.md)
media-services Limits Quotas Constraints Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/limits-quotas-constraints-reference.md
Title: Quotas and limits in Azure Media Services
description: This topic describes quotas and limits in Microsoft Azure Media Services. documentationcenter: ''-+ editor: '' Previously updated : 10/23/2020 Last updated : 08/25/2021
This article lists some of the most common Microsoft Azure Media Services limits
<sup>1</sup> The maximum size supported for a single blob is currently up to 5 TB in Azure Blob Storage. Additional limits apply in Media Services based on the VM sizes that are used by the service. The size limit applies to the files that you upload and also the files that get generated as a result of Media Services processing (encoding or analyzing). If your source file is larger than 260 GB, your job will likely fail.
-The following table shows the limits on the media reserved units S1, S2, and S3. If your source file is larger than the limits defined in the table, your encoding job fails. If you encode 4K resolution sources of long duration, you're required to use S3 media reserved units to achieve the performance needed. If you have 4K content that's larger than the 260-GB limit on the S3 media reserved units, open a support ticket.
-
-|Media reserved unit type|Maximum input size (GB)|
-|||
-|S1 | 26|
-|S2 | 60|
-|S3 |260|
- <sup>2</sup> The storage accounts must be from the same Azure subscription. ## Jobs (encoding & analyzing) limits
media-services Media Reserved Units Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/media-reserved-units-cli-how-to.md
Title: Scale Media Reserved Units (MRUs) CLI description: This topic shows how to use CLI to scale media processing with Azure Media Services. -+ Previously updated : 03/22/2021 Last updated : 08/25/2021
-# How to scale media reserved units
+# How to scale media reserved units (legacy)
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-This article shows you how to scale Media Reserved Units (MRSs) for faster encoding.
+This article shows you how to scale Media Reserved Units (MRUs) for faster encoding.
> [!WARNING] > This command will no longer work for Media Services accounts that are created with the 2020-05-01 version of the API or later. For these accounts, media reserved units are no longer needed because the system will automatically scale up and down based on load. If you don't see the option to manage MRUs in the Azure portal, you're using an account that was created with the 2020-05-01 API or later.
+> The purpose of this article is to document the legacy process of using MRUs.
## Prerequisites
az ams account mru set -n amsaccount -g amsResourceGroup --count 10 --type S3
## Billing
-You are charged based on number of minutes the Media Reserved Units are provisioned in your account. This occurs independent of whether there are any Jobs running in your account. For a detailed explanation, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
-
-## Next step
-
-[Analyze videos](analyze-videos-tutorial.md)
+ While there were previously charges for Media Reserved Units, as of April 17, 2021, there are no longer any charges for accounts that have Media Reserved Units configured.
## See also
-* [Quotas and limits](limits-quotas-constraints-reference.md)
+* [Migrate from Media Services v2 to v3](migrate-v-2-v-3-migration-introduction.md)
media-services Migrate V 2 V 3 Migration Scenario Based Media Reserved Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/latest/migrate-v-2-v-3-migration-scenario-based-media-reserved-units.md
Title: Media Reserved Units (MRUs) migration guidance description: This article gives you MRU scenario based guidance that will assist you in migrating from Azure Media Services V2 to V3. -+ Previously updated : 03/25/2021 Last updated : 08/25/2021
-# Media Reserved Units (MRUs) scenario-based migration guidance
+# Media reserved units migration guidance
![migration guide logo](./media/migration-guide/azure-media-services-logo-migration-guide.svg)
This article gives you MRU scenario based guidance that will assist you in migrating from Azure Media Services V2 to V3. -- For new V3 accounts created in the Azure portal, or with the 2020-05-01 version of the V3 API, you no longer are required to set Media Reserved Units (MRUs). The system will now automatically scale up and down based on load.-- If you have a V3 or V2 account that was created before the 2020-05-01 version of the API, you can still control the concurrency and performance of your jobs using Media Reserved Units. For more information, see Scaling Media Processing. You can manage the MRUs using CLI 2.0 for Media Services V3, or using the Azure portal. -- If you don't see the option to manage MRUs in the Azure portal, you're running an account that was created with the 2020-05-01 API or later.-- If you are familiar with setting your MRU type to S3, your performance will improve or remain the same.-- If you are an existing V2 customer, you need to create a new V2 account to support your existing application prior to the completion of migration. -- Indexer V1 or other media processors that are not fully deprecated yet may need to be enabled again.
+> [!Important]
+> Media reserved units are no longer needed for any Media Services account, because the system automatically scales up and down based on load.
+
+## Scenario guidance
+
+Please migrate your MRUs based on the following scenarios:
+
+* For all Media Services accounts, you no longer are required to set Media Reserved Units (MRUs). The system will now automatically scale up and down based on load.
+* If you have an account that was created before the 2020-05-01 version of the API, you still have access to APIs for managing MRUs; however, none of the MRU configuration that you set will be used to control encoding concurrency or performance. For more information, see [Scaling Media Processing](../previous/media-services-scale-media-processing-overview.md). You can manage the MRUs by using CLI 2.0 for Media Services V3, or by using the Azure portal.
+* If you don't see the option to manage MRUs in the Azure portal, you're running an account that was created with the 2020-05-01 API or later.
+* If you are familiar with setting your MRU type to S3, your performance will improve or remain the same with the removal of MRUs.
+* If you are an existing V2 customer, you need to create a new V3 account to support your existing application prior to the completion of migration.
+* Indexer V1 or other media processors that are not fully deprecated yet may need to be enabled again.
For more information about MRUs, see [Media Reserved Units](concept-media-reserved-units.md) and [How to scale media reserved units](media-reserved-units-cli-how-to.md).
media-services Legacy Components https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/legacy-components.md
Title: Azure Media Services legacy components | Microsoft Docs
description: This topic discusses Azure Media Services legacy components. documentationcenter: ''-+ editor: ''
na ms.devlang: na Previously updated : 07/26/2021 Last updated : 08/24/2021 # Azure Media Services legacy components
The following Media Analytics media processors are either deprecated or soon to
| **Media processor name** | **Retirement date** | **Additional notes** | | | | |
-| Azure Media Indexer 2 | January 1st, 2020 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media (formerly Video Indexer)](migrate-indexer-v1-v2.md). |
-| Azure Media Indexer | March 1, 2023 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md). |
+| Azure Media Indexer | January 1st, 2020 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md) (formerly Video Indexer). |
+| Azure Media Indexer 2 | March 1, 2023 | This media processor will be replaced by the [Media Services v3 AudioAnalyzerPreset Basic mode](../latest/analyze-video-audio-files-concept.md). For more information, see [Migrate from Azure Media Indexer 2 to Azure Video Analyzer for Media](migrate-indexer-v1-v2.md) (formerly Video Indexer). |
| Motion Detection | June 1st, 2020|No replacement plans at this time. |
| Video Summarization |June 1st, 2020|No replacement plans at this time.|
| Video Optical Character Recognition | June 1st, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
| Face Detector | June 1st, 2020 | This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
| Content Moderator | June 1st, 2020 |This media processor was replaced by Azure Video Analyzer for Media. Also, consider using [Azure Media Services v3 API](../latest/analyze-video-audio-files-concept.md). <br/>See [Compare Azure Media Services v3 presets and Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/compare-video-indexer-with-media-services-presets.md). |
+| Media Encoder Premium Workflow | February 29, 2024 | The AMS v2 API no longer supports the Premium Encoder. If you previously used the workflow-based Premium Encoder for HEVC encoding, you should migrate to the [new v3 Standard Encoder](../latest/encode-media-encoder-standard-formats-reference.md) with HEVC encoding support. <br/> If you require advanced workflow features of the Premium Encoder, you're encouraged to start using an Azure advanced encoding partner from [Imagine Communications](https://imaginecommunications.com/), [Telestream](https://telestream.net), or [Bitmovin](https://bitmovin.com). |
## Next steps
media-services Media Services Dotnet Encoding Units https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-dotnet-encoding-units.md
Title: Scale media processing by adding encoding units - Azure | Microsoft Docs
description: This article demonstrates how to add encoding units with Azure Media Services .NET. documentationcenter: ''-+ editor: '' ms.assetid: 33f7625a-966a-4f06-bc09-bccd6e2a42b5
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 08/24/2021
> * [REST](/rest/api/media/operations/encodingreservedunittype)
> * [Java](https://github.com/rnrneverdies/azure-sdk-for-media-services-java-samples)
> * [PHP](https://github.com/Azure/azure-sdk-for-php/tree/master/examples/MediaServices)
->
->
## Overview

> [!IMPORTANT]
-> Make sure to review the [Overview](media-services-scale-media-processing-overview.md) to get more information about scaling media processing.
->
->
-
-To change the reserved unit type and the number of encoding reserved units using .NET SDK, do the following:
-
-```csharp
-IEncodingReservedUnit encodingS1ReservedUnit = _context.EncodingReservedUnits.FirstOrDefault();
-encodingS1ReservedUnit.ReservedUnitType = ReservedUnitType.Basic; // Corresponds to S1
-encodingS1ReservedUnit.Update();
-Console.WriteLine("Reserved Unit Type: {0}", encodingS1ReservedUnit.ReservedUnitType);
-
-encodingS1ReservedUnit.CurrentReservedUnits = 2;
-encodingS1ReservedUnit.Update();
-
-Console.WriteLine("Number of reserved units: {0}", encodingS1ReservedUnit.CurrentReservedUnits);
-```
-
-## Opening a Support Ticket
-
-By default every Media Services account can scale to up to 10 S2 or S3 Media Reserved Units (MRUs) or 25 S1 MRUs, and 5 On-Demand Streaming Reserved Units. You can request a higher limit by opening a support ticket.
+> By default, Media Reserved Units are no longer needed and are not supported by Azure Media Services. Make sure to review the [Overview](media-services-scale-media-processing-overview.md) to get more information about scaling media processing.
## Media Services learning paths

[!INCLUDE [media-services-learning-paths-include](../../../includes/media-services-learning-paths-include.md)]
media-services Media Services Portal Scale Media Processing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-portal-scale-media-processing.md
Title: Scale media processing using the Azure portal | Microsoft Docs
description: This tutorial walks you through the steps of scaling media processing using the Azure portal. documentationcenter: ''-+ editor: '' ms.assetid: e500f733-68aa-450c-b212-cf717c0d15da
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 08/24/2021 # Change the reserved unit type
## Overview
-A Media Services account is associated with a Reserved Unit Type, which determines the speed with which your media processing tasks are processed. You can pick between the following reserved unit types: **S1**, **S2**, or **S3**. For example, the same encoding job runs faster when you use the **S2** reserved unit type compare to the **S1** type.
-
-In addition to specifying the reserved unit type, you can specify to provision your account with **Reserved Units** (RUs). The number of provisioned RUs determines the number of media tasks that can be processed concurrently in a given account.
-
->[!NOTE]
->RUs work for parallelizing all media processing, including indexing jobs using Azure Media Indexer. However, unlike encoding, indexing jobs do not get processed faster with faster reserved units.
+By default, Media Reserved Units are no longer needed and are not supported by Azure Media Services. For compatibility purposes, the current Azure portal has an option for you to manage and scale MRUs. However, by default, none of the MRU configurations that you set will be used to control encoding concurrency or performance.
> [!IMPORTANT]
> Make sure to review the [overview](media-services-scale-media-processing-overview.md) topic to get more information about scaling media processing.
->
->
## Scale media processing
+>[!NOTE]
+>Selecting MRUs will not affect concurrency or performance in Azure Media Services V3.
+
To change the reserved unit type and the number of reserved units, do the following:

1. In the [Azure portal](https://portal.azure.com/), select your Azure Media Services account.
media-services Media Services Quotas And Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-quotas-and-limitations.md
Title: Media Services quotas and limitation | Microsoft Docs
description: This topic describes quotas and limitations associated with Microsoft Azure Media Services. documentationcenter: ''-+ editor: '' ms.assetid: d4c43afd-dba8-40a2-ad92-6de54152f7ec
na ms.devlang: na Previously updated : 03/10/2021 Last updated : 08/24/2021 # Quotas and Limitations
media-services Media Services Scale Media Processing Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/media-services/previous/media-services-scale-media-processing-overview.md
Title: Media reserved units overview | Microsoft Docs
description: This article is an overview of scaling Media Processing with Azure Media Services. documentationcenter: ''-+ editor: ''
na ms.devlang: na Previously updated : 3/10/2021 Last updated : 08/24/2021 # Media reserved units [!INCLUDE [media services api v2 logo](./includes/v2-hr.md)]
-Azure Media Services enables you to scale media processing by managing Media Reserved Units (MRUs). An MRU provides additional computing capacity required for encoding media. The number of MRUs determine the speed with which your media tasks are processed and how many media tasks can be processed concurrently in an account. For example, if your account has five MRUs and there are tasks to be processed, then five media tasks can be running concurrently. Any remaining tasks will be queued up and can be picked up for processing sequentially when a running task finishes. Each MRU that you provision results in a capacity reservation but does not provide you with dedicated resource. During times of extremely high demand, all of your MRUs may not start processing immediately.
+Media Reserved Units (MRUs) were previously used to control encoding concurrency and performance. MRUs are now used only for the following legacy media processors, which will soon be deprecated. See [Azure Media Services legacy components](legacy-components.md) for retirement information about these legacy processors:
-## Choosing between different reserved unit types
+* Media Encoder Premium Workflow
+* Media Indexer V1 and V2
-The following table helps you make a decision when choosing between different encoding speeds. It shows the duration of encoding for a 7 minute, 1080p video depending on the MRU used.
-
-|RU type|Scenario|Example results for the 7 min 1080p video |
-||||
-| **S1**|Single bitrate encoding. <br/>Files at SD or below resolutions, not time sensitive, low cost.|Encoding to single bitrate SD resolution MP4 file using "H264 Single Bitrate SD 16x9" takes around 7 minutes.|
-| **S2**|Single bitrate and multiple bitrate encoding.<br/>Normal usage for both SD and HD encoding.|Encoding with "H264 Single Bitrate 720p" preset takes around 6 minutes.<br/><br/>Encoding with "H264 Multiple Bitrate 720p" preset takes around 12 minutes.|
-| **S3**|Single bitrate and multiple bitrate encoding.<br/>Full HD and 4K resolution videos. Time sensitive, faster turnaround encoding.|Encoding with "H264 Single Bitrate 1080p" preset takes approximately 3 minutes.<br/><br/>Encoding with "H264 Multiple Bitrate 1080p" preset takes approximately 8 minutes.|
-
-> [!NOTE]
-> If you do not provision MRUs for your account, your media tasks will be processed with the performance of an S1 MRU and tasks will be picked up sequentially. No processing capacity is reserved so the wait time between one task finishing and the next one starting will depend on the availability of resources in the system.
-
-## Considerations
-
-* For Audio Analysis and Video Analysis jobs that are triggered by Media Services v3 or Video Indexer, provisioning the account with ten S3 units is highly recommended. If you need more than 10 S3 MRUs, open a support ticket using the [Azure portal](https://portal.azure.com/).
-* For encoding tasks that don't have MRUs, there is no upper bound to the time your tasks can spend in queued state, and at most only one task will be running at a time.
+For all other media processors, you no longer need to manage MRUs or request quota increases for any Media Services account, because the system will automatically scale up and down based on load. You will also see performance that is equal to or better than when you were using MRUs.
## Billing
-You are charged based on number of minutes the Media Reserved Units are provisioned in your account. This occurs independent of whether there are any Jobs running in your account. For a detailed explanation, see the FAQ section of the [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/) page.
+While there were previously charges for Media Reserved Units, as of April 17, 2021, there are no longer any charges for accounts that have Media Reserved Units configured.
-## Quotas and limitations
+## Scaling MRUs
-For information about quotas and limitations and how to open a support ticket, see [Quotas and limitations](media-services-quotas-and-limitations.md).
-
-## Next steps
-
-Try scaling media processing with one of these technologies:
+For compatibility purposes, you can continue to use the Azure portal or the following APIs to manage and scale MRUs:
[.NET](media-services-dotnet-encoding-units.md)
[Portal](media-services-portal-scale-media-processing.md)
[REST](/rest/api/media/operations/encodingreservedunittype)
[Java](https://github.com/rnrneverdies/azure-sdk-for-media-services-java-samples)
-[PHP](https://github.com/Azure/azure-sdk-for-php/tree/master/examples/MediaServices)
+[PHP](https://github.com/Azure/azure-sdk-for-php/tree/master/examples/MediaServices)
+
+However, by default, none of the MRU configuration that you set will be used to control encoding concurrency or performance. The only exception is if you are encoding with one of the following legacy media processors: Media Encoder Premium Workflow or Media Indexer V1.
open-datasets Dataset Gnomad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-gnomad.md
To download all the VCFs recursively:

```powershell
$ azcopy cp --recursive=true https://datasetgnomad.blob.core.windows.net/dataset/release/3.0/vcf/genomes .
```
+**NEW: Parquet format of gnomAD v2.1.1 VCF files (exomes and genomes)**
+
+To view the parquet files:
+
+```powershell
+$ azcopy ls https://datasetgnomadparquet.blob.core.windows.net/dataset
+```
+
+To download all the parquet files recursively:
+
+```powershell
+$ azcopy cp --recursive=true https://datasetgnomadparquet.blob.core.windows.net/dataset .
+```
+ The [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) is also a useful tool for browsing the list of files in the gnomAD release. ## Use Terms
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-firewall-rules.md
Title: Firewall rules - Azure Database for PostgreSQL - Flexible Server
-description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Flexible Server with public networking deployment option.
+ Title: Firewall rules in Azure Database for PostgreSQL - Flexible Server
+description: This article describes how to use firewall rules to connect to Azure Database for PostgreSQL - Flexible Server with the public networking deployment option.
Last updated 07/21/2021 # Firewall rules in Azure Database for PostgreSQL - Flexible Server
-You can choose two main networking options when running your Azure Database for PostgreSQL – Flexible Server. The options are private access (VNet integration) and public access (allowed IP addresses). With public access flexible server is accessed through a public endpoint.
-When public access option is chosen, Azure Database for PostgreSQL server is secure by default preventing all access to your database server until you specify which IP hosts are allowed to access it. The firewall grants access to the server based on the originating IP address of each request.
-To configure your firewall, you create firewall rules that specify ranges of acceptable IP addresses. You can create firewall rules at the server level.
-**Firewall rules:** These rules enable clients to access your entire Azure Database for PostgreSQL Server, that is, all the databases within the same logical server. Server-level firewall rules can be configured by using the Azure portal or using Azure CLI commands. To create server-level firewall rules, you must be the subscription owner or a subscription contributor.
+When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
-## Firewall overview
-All access to your Azure Database for PostgreSQL server is blocked by the firewall by default. To access your server from another computer/client or application, you need to specify one or more server-level firewall rules to enable access to your server. Use the firewall rules to specify allowed public IP address ranges. Access to the Azure portal website itself is not impacted by the firewall rules.
-Connection attempts from the internet and Azure must first pass through the firewall before they can reach your PostgreSQL Database, as shown in the following diagram:
+With public access, the Azure Database for PostgreSQL server is accessed through a public endpoint. By default, the firewall blocks all access to the server. To specify which IP hosts can access the server, you create server-level *firewall rules*. Firewall rules specify allowed public IP address ranges. The firewall grants access to the server based on the originating IP address of each request.
+You can create firewall rules by using the Azure portal or by using Azure CLI commands. You must be the subscription owner or a subscription contributor.
+
+Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server. The rules don't affect access to the Azure portal website.
+
+The following diagram shows how connection attempts from the internet and Azure must pass through the firewall before they can reach PostgreSQL databases:
++
+## Connect from the internet
+If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted. Otherwise, it's rejected.
+
+For example, if your application connects with a Java Database Connectivity (JDBC) driver for PostgreSQL, you might encounter this error because the firewall is blocking the connection:
-## Connecting from the Internet
-Server-level firewall rules apply to all databases on the same Azure Database for PostgreSQL server.
-If the source IP address of the request is within one of the ranges specified in the server-level firewall rules, the connection is granted otherwise it is rejected. For example, if your application connects with JDBC driver for PostgreSQL, you may encounter this error attempting to connect when the firewall is blocking the connection.
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL
-## Connecting from Azure
-It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
+## Connect from Azure
+We recommend that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service app, or use a public IP address that's tied to a virtual machine.
+
+If a fixed outgoing IP address isn't available for your Azure service, consider enabling connections from all IP addresses for Azure datacenters:
-If a fixed outgoing IP address isn't available for your Azure service, you can consider enabling connections from all Azure datacenter IP addresses. This setting can be enabled from the Azure portal by setting the **Allow public access from any Azure service within Azure to this server** option to **ON** from the **Networking** pane and hitting **Save**. From the Azure CLI, a firewall rule setting with starting and ending address equal to 0.0.0.0 does the equivalent. If the connection attempt is rejected by firewall rules, it does not reach the Azure Database for PostgreSQL server.
+1. In the Azure portal, on the **Networking** pane, select the **Allow public access from any Azure service within Azure to this server** checkbox.
+1. Select **Save**.
> [!IMPORTANT]
-> The **Allow public access from any Azure service within Azure to this server** option configures the firewall to allow all connections from Azure including connections from the subscriptions of other customers. When selecting this option, make sure your login and user permissions limit access to only authorized users.
->
+> The **Allow public access from any Azure service within Azure to this server** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When you're using this option, make sure your sign-in and user permissions limit access to only authorized users.
-## Programmatically managing firewall rules
-In addition to the Azure portal, firewall rules can be managed programmatically using Azure CLI.
+## Programmatically manage firewall rules
+In addition to using the Azure portal, you can manage firewall rules programmatically by using the Azure CLI.
-## Troubleshooting firewall issues
-Consider the following points when access to the Microsoft Azure Database for PostgreSQL Server service does not behave as you expect:
+From the Azure CLI, a firewall rule setting with a starting and ending address equal to 0.0.0.0 does the equivalent of the **Allow public access from any Azure service within Azure to this server** option in the portal. If firewall rules reject the connection attempt, the app won't reach the Azure Database for PostgreSQL server.
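As a minimal sketch, the equivalent rule can be created from the Azure CLI as shown below; the resource group, server, and rule names are hypothetical placeholders:

```azurecli-interactive
# Allow connections from all Azure services and resources.
# The special 0.0.0.0 start and end address matches the portal option described above.
az postgres flexible-server firewall-rule create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --rule-name AllowAllAzureServices \
    --start-ip-address 0.0.0.0 \
    --end-ip-address 0.0.0.0
```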
-* **Changes to the allow list have not taken effect yet:** There may be as much as a five-minute delay for changes to the Azure Database for PostgreSQL Server firewall configuration to take effect.
+## Troubleshoot firewall problems
+Consider the following possibilities when access to an Azure Database for PostgreSQL server doesn't behave as you expect:
-* **The login is not authorized or an incorrect password was used:** If a login does not have permissions on the Azure Database for PostgreSQL server or the password used is incorrect, the connection to the Azure Database for PostgreSQL server is denied. Creating a firewall setting only provides clients with an opportunity to attempt connecting to your server; each client must still provide the necessary security credentials.
+* **Changes to the allowlist haven't taken effect yet**: Changes to the firewall configuration of an Azure Database for PostgreSQL server might take up to five minutes.
- For example, using a JDBC client, the following error may appear.
- > java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
+* **The sign-in isn't authorized, or an incorrect password was used**: If a sign-in doesn't have permissions on the Azure Database for PostgreSQL server or the password is incorrect, the connection to the server is denied. Creating a firewall setting only provides clients with an opportunity to try connecting to your server. Each client must still provide the necessary security credentials.
-* **Dynamic IP address:** If you have an Internet connection with dynamic IP addressing and you are having trouble getting through the firewall, you could try one of the following solutions:
+ For example, the following error might appear if authentication fails for a JDBC client:
- * Ask your Internet Service Provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL Server, and then add the IP address range as a firewall rule.
+ > java.util.concurrent.ExecutionException: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: password authentication failed for user "yourusername"
- * Get static IP addressing instead for your client computers, and then add the static IP address as a firewall rule.
+* **The firewall isn't allowing dynamic IP addresses**: If you have an internet connection with dynamic IP addressing and you're having trouble getting through the firewall, try one of the following solutions:
-
+ * Ask your internet service provider (ISP) for the IP address range assigned to your client computers that access the Azure Database for PostgreSQL server. Then add the IP address range as a firewall rule; a sketch of such a rule appears after this list.
+ * Get static IP addresses instead for your client computers, and then add the static IP addresses as a firewall rule.
-* **Firewall rule is not available for IPv6 format:** The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, it will show the validation error.
+* **Firewall rules aren't available for IPv6 format**: The firewall rules must be in IPv4 format. If you specify firewall rules in IPv6 format, you'll get a validation error.
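For the ISP-supplied range in the troubleshooting step above, a hedged Azure CLI example follows; the names and the 203.0.113.0-203.0.113.255 documentation range are hypothetical:

```azurecli-interactive
# Allow a client IP address range supplied by your ISP (hypothetical values).
az postgres flexible-server firewall-rule create \
    --resource-group myresourcegroup \
    --name mydemoserver \
    --rule-name AllowMyIspRange \
    --start-ip-address 203.0.113.0 \
    --end-ip-address 203.0.113.255
```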
## Next steps
-* [Create and manage Azure Database for PostgreSQL firewall rules using the Azure portal](how-to-manage-firewall-portal.md)
-* [Create and manage Azure Database for PostgreSQL firewall rules using Azure CLI](how-to-manage-firewall-cli.md)
+* [Create and manage Azure Database for PostgreSQL firewall rules by using the Azure portal](how-to-manage-firewall-portal.md)
+* [Create and manage Azure Database for PostgreSQL firewall rules by using the Azure CLI](how-to-manage-firewall-cli.md)
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/concepts-security.md
Title: 'Security in Azure Database for PostgreSQL - Flexible Server'
-description: Learn about security in the Flexible Server deployment option for Azure Database for PostgreSQL - Flexible Server
+description: Learn about security in the Flexible Server deployment option for Azure Database for PostgreSQL.
Last updated 07/26/2021
# Security in Azure Database for PostgreSQL - Flexible Server
-There are multiple layers of security that are available to protect the data on your Azure Database for PostgreSQL server. This article outlines those security options.
+Multiple layers of security are available to help protect the data on your Azure Database for PostgreSQL server. This article outlines those security options.
## Information protection and encryption
-### In-transit
- Azure Database for PostgreSQL secures your data by encrypting data in-transit with Transport Layer Security. Encryption (SSL/TLS) is enforced by default.
+Azure Database for PostgreSQL encrypts data in two ways:
-### At-rest
-The Azure Database for PostgreSQL service uses the FIPS 140-2 validated cryptographic module for storage encryption of data at-rest. Data, including backups, are encrypted on disk, including the temporary files created while running queries. The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies like Transparent Data Encryption (TDE) in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.
+- **Data in transit**: Azure Database for PostgreSQL encrypts in-transit data with Secure Sockets Layer and Transport Layer Security (SSL/TLS). Encryption is enforced by default.
+- **Data at rest**: For storage encryption, Azure Database for PostgreSQL uses the FIPS 140-2 validated cryptographic module. Data is encrypted on disk, including backups and the temporary files created while queries are running.
+ The service uses the AES 256-bit cipher included in Azure storage encryption, and the keys are system managed. This is similar to other at-rest encryption technologies, like transparent data encryption in SQL Server or Oracle databases. Storage encryption is always on and can't be disabled.
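Because encryption in transit is enforced by default, clients should connect with TLS required. A minimal sketch of such a connection with psql, using a hypothetical server and user name:

```azurecli-interactive
# Connect with sslmode=require so the session is encrypted in transit.
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require"
```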
-## Network security
-
-You can choose two main networking options when running your Azure Database for PostgreSQL – Flexible Server. The options are private access (VNet integration) and public access (allowed IP addresses). By utilizing private access, your flexible server is deployed into your Azure Virtual Network. Azure virtual networks provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses.
-With public access flexible server is accessed through a public endpoint. The public endpoint is a publicly resolvable DNS address, access to which can is secured through firewall that by default blocks all connections.
+## Network security
+When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options:
-### IP firewall rules
-IP firewall rules grant access to servers based on the originating IP address of each request. See the [firewall rules overview](concepts-firewall-rules.md) for more information.
+- **Private access**: You can deploy your server into an Azure virtual network. Azure virtual networks help provide private and secure network communication. Resources in a virtual network can communicate through private IP addresses. For more information, see the [networking overview for Azure Database for PostgreSQL - Flexible Server](concepts-networking.md).
+ Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. For more information, see the [overview of network security groups](../../virtual-network/network-security-groups-overview.md).
-### Private VNET Access
-You can deploy your flexible server into your Azure Virtual Network. Azure virtual networks provide private and secure network communication. For more information,see the [flexible server](concepts-networking.md)
+- **Public access**: The server can be accessed through a public endpoint. The public endpoint is a publicly resolvable DNS address. Access to it is secured through a firewall that blocks all connections by default.
-### Network security groups (NSG)
-Security rules in network security groups enable you to filter the type of network traffic that can flow in and out of virtual network subnets and network interfaces. For more information, see [Network Security Groups Overview](../../virtual-network/network-security-groups-overview.md)
+ IP firewall rules grant access to servers based on the originating IP address of each request. For more information, see the [overview of firewall rules](concepts-firewall-rules.md).
## Access management
-While creating the Azure Database for PostgreSQL server, you provide credentials for an administrator role. This administrator role can be used to create additional [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
-
-You can also connect to the server using [Azure Active Directory (AAD) authentication](../concepts-aad-authentication.md).
--
-### Azure Defender protection
-
- Azure Database for PostgreSQL -Flexible currently doesn't support [Azure Defender Protection](../../security-center/azure-defender.md).
-
+While you're creating the Azure Database for PostgreSQL server, you provide credentials for an administrator role. This administrator role can be used to create more [PostgreSQL roles](https://www.postgresql.org/docs/current/user-manag.html).
-[Audit logging](../concepts-audit.md) is available to track activity in your databases.
+You can also connect to the server by using [Azure Active Directory authentication](../concepts-aad-authentication.md). [Audit logging](../concepts-audit.md) is available to track activity in your databases.
+> [!NOTE]
+> Azure Database for PostgreSQL - Flexible Server currently doesn't support [Azure Defender protection](../../security-center/azure-defender.md).
## Next steps
- - Enable firewall rules for [IPs](concepts-firewall-rules.md) for public access networking
- - Learn about [private access networking with Azure Database for PostgreSQL - Flexible Server](concepts-networking.md)
- - Learn about [Azure Active Directory authentication](../concepts-aad-authentication.md) in Azure Database for PostgreSQL
+- Enable [firewall rules for IP addresses](concepts-firewall-rules.md) for public access networking.
+- Learn about [private access networking with Azure Database for PostgreSQL - Flexible Server](concepts-networking.md).
+- Learn about [Azure Active Directory authentication](../concepts-aad-authentication.md) in Azure Database for PostgreSQL.
postgresql How To Upgrade Using Dump And Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/how-to-upgrade-using-dump-and-restore.md
Previously updated : 06/23/2021 Last updated : 08/25/2021 # Upgrade your PostgreSQL database using dump and restore
To step through this how-to-guide, you need:
- Create corresponding databases in the target database server.
- You can skip upgrading `azure_maintenance` or template databases.
- Refer to the tables above to determine whether the database is suitable for this mode of migration.
-- If you want to use Azure Cloud Shell, please note that the session times out after 20 minutes. If your database size is < 10 GB, you may be able to complete the upgrade without the session timing out. Otherwise, you may have to keep the session open by other means, such as pressing <Enter> key once in 10-15 minutes.
+- If you want to use Azure Cloud Shell, please note that the session times out after 20 minutes. If your database size is < 10 GB, you may be able to complete the upgrade without the session timing out. Otherwise, you may have to keep the session open by other means, such as pressing any key once in 10-15 minutes.
## Example database used in this guide
In this guide, the following source and target servers and database names are us
| Target user name | pg@pg-11 | >[!NOTE]
-> Flexible server supports PostgreSQL version 11 onwards. Also, flexible server user name does not require @<servername>.
+> Flexible server supports PostgreSQL version 11 onwards. Also, flexible server user name does not require @dbservername.
## Upgrade your databases using offline migration methods

You may choose to use one of the methods described in this section for your upgrades. You can use the following tips while performing the tasks.
You may choose to use one of the methods described in this section for your upgr
- In the Windows command line, run the command `SET PGSSLMODE=require` before running the pg_restore command. In Linux or Bash run the command `export PGSSLMODE=require` before running the pg_restore command.

>[!Important]
-> It is recommended to test and validate the commands in a test environment before you use them in production.
+> The steps and methods provided in this document are to give some examples of pg_dump/pg_restore commands and do not represent all possible ways to perform upgrades. It is recommended to test and validate the commands in a test environment before you use them in production.
-### Method 1: Migrate using dump file
+### Method 1: Using pg_dump and psql
-This method involves two steps. First is to create a dump from the source server. The second step is to restore the dump file to the target server. More details, please see the [Migrate using dump and restore](howto-migrate-using-dump-and-restore.md) documentation. This is the recommended method if you have large databases and your client system has enough storage to store the dump file.
+This method involves two steps. The first is to dump a SQL file from the source server using `pg_dump`. The second step is to import the file to the target server using `psql`. Please see the [Migrate using export and import](howto-migrate-using-export-and-import.md) documentation for details.
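A minimal sketch of this method, using the example servers from this guide; adjust server names, users, and the file name for your environment:

```azurecli-interactive
# Dump the source database to a plain SQL file.
pg_dump --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb -f bench5gb.sql

# Import the SQL file into the pre-created target database.
psql --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb -f bench5gb.sql
```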
-### Method 2: Migrate using streaming the dump data to the target database
+### Method 2: Using pg_dump and pg_restore
+
+In this method of upgrade, you first create a dump from the source server using `pg_dump`. Then you restore that dump file to the target server using `pg_restore`. Please see the [Migrate using dump and restore](howto-migrate-using-dump-and-restore.md) documentation for details.
+
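A minimal sketch of this method with the same example servers; the dump file name is a hypothetical placeholder:

```azurecli-interactive
# Create a custom-format dump file from the source server.
pg_dump -Fc --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb -f bench5gb.dump

# Restore the dump into the target server, skipping ownership assignments.
pg_restore --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb bench5gb.dump
```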
+### Method 3: Streaming the dump data to the target database
If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, then you can use this method. The database dump is streamed directly to the target database server and does not store the dump in the client. Hence, this can be used with a client with limited storage and even can be run from the Azure Cloud Shell.
If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, the
2. Run the dump and restore as a single command line using a pipe.

   ```azurecli-interactive
- pg_dump -Fc -v --mySourceServer --port=5432 --username=myUser --dbname=mySourceDB | pg_restore -v --no-owner --host=myTargetServer --port=5432 --username=myUser --dbname=myTargetDB
+ pg_dump -Fc --host=mySourceServer --port=5432 --username=myUser --dbname=mySourceDB | pg_restore --no-owner --host=myTargetServer --port=5432 --username=myUser --dbname=myTargetDB
   ```

   For example,

   ```azurecli-interactive
- pg_dump -Fc -v --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb | pg_restore -v --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb
+ pg_dump -Fc --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb | pg_restore --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb
   ```

3. Once the upgrade (migration) process completes, you can test your application with the target server.
4. Repeat this process for all the databases within the server.
If you do not have a PostgreSQL client or you want to use Azure Cloud Shell, the
| 50 GB | 1-1.5 hours |
| 100 GB | 2.5-3 hours|
-### Method 3: Migrate using parallel dump and restore
+### Method 4: Using parallel dump and restore
You can consider this method if you have a few large tables in your database and you want to parallelize the dump and restore process for that database. You also need enough storage in your client system to accommodate backup dumps. This parallel dump and restore process reduces the time required to complete the whole migration. For example, the 50 GB pgbench database that took 1-1.5 hours to migrate using Methods 1 and 2 took less than 30 minutes using this method.
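A minimal sketch of a parallel dump and restore, assuming the directory format (`-Fd`) and four parallel jobs; tune the job count to your client's cores and the server's capacity:

```azurecli-interactive
# Dump the source database in directory format with 4 parallel jobs.
pg_dump -Fd -j 4 --host=pg-95.postgres.database.azure.com --port=5432 --username=pg@pg-95 --dbname=bench5gb -f bench5gb_dir

# Restore from the dump directory with 4 parallel jobs.
pg_restore -j 4 --no-owner --host=pg-11.postgres.database.azure.com --port=5432 --username=pg@pg-11 --dbname=bench5gb bench5gb_dir
```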
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/private-link/private-endpoint-overview.md
The following table includes a list of known limitations when using private endp
|Limitation |Description |Mitigation | ||||
-|Network Security Group (NSG) rules and User-Defined Routes don't apply to Private Endpoint | NSG isn't supported on private endpoints. While subnets containing the private endpoint can have NSG associated with it, the rules won't be effective on traffic processed by the private endpoint. You must have [network policies enforcement disabled](disable-private-endpoint-network-policy.md) to deploy private endpoints in a subnet. NSG is still enforced on other workloads hosted on the same subnet. Routes on any client subnet will be using an /32 prefix, changing the default routing behavior requires a similar UDR | Control the traffic by using NSG rules for outbound traffic on source clients. Deploy individual routes with /32 prefix to override private endpoint routes. NSG Flow logs and monitoring information for outbound connections are still supported and can be used |
+|Network Security Group (NSG) rules and User-Defined Routes don't apply to Private Endpoint | NSG isn't supported on private endpoints. While subnets containing the private endpoint can have NSG associated with it, the rules won't be effective on traffic processed by the private endpoint. You must have [network policies enforcement disabled](disable-private-endpoint-network-policy.md) to deploy private endpoints in a subnet. NSG is still enforced on other workloads hosted on the same subnet. Routes on any client subnet will be using an /32 prefix, changing the default routing behavior requires a similar UDR. NSG flow logs will not log any traffic sent to private endpoints. | Control the traffic by using NSG rules for outbound traffic on source clients. Deploy individual routes with /32 prefix to override private endpoint routes. NSG Flow logs and monitoring information for outbound connections are still supported and can be used |
## Next steps
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-permissions.md
Azure Purview uses **Collections** to organize and manage access across its sour
> [!NOTE] > At this time, this information only applies for Purview accounts created **on or after August 18, 2021**. Instances created before August 18 are able to create collections, but do not manage permissions through those collections. For information on access control for a Purview instance created before August 18, see our [**legacy permission guide**](#legacy-permission-guide) at the bottom of the page. >
-> All legacy accounts will be upgraded automatically in the coming weeks. You will receive an email notification when your Purview account is upgraded. When the account is upgraded, all assigned permissions will be automatically redeployed to the root collection.
+> All legacy accounts will be upgraded automatically in the coming weeks. You will receive an email notification when your Purview account is upgraded. When the account is upgraded, all assigned permissions will be automatically redeployed to the root collection. At that time, permissions should be managed through collections, not Access Control (IAM). IAM permissions will no longer apply to Purview artifacts.
## Collections
purview Catalog Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/catalog-private-link.md
For scenarios where _ingestion_ private endpoint is used in your Azure Purview a
|Azure SQL Managed Instance | Self-Hosted IR| SQL Authentication|
|Azure Cosmos DB| Self-Hosted IR| Account Key|
|SQL Server | Self-Hosted IR| SQL Authentication|
+|Azure Synapse Analytics | Self-Hosted IR| Service Principal|
+|Azure Synapse Analytics | Self-Hosted IR| SQL Authentication|
## Frequently Asked Questions
To view list of current limitations related to Azure Purview private endpoints,
## Next steps

-- [Deploy ingestion private endpoints](./catalog-private-link-ingestion.md)
+- [Deploy ingestion private endpoints](./catalog-private-link-ingestion.md)
purview How To Create And Manage Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/how-to-create-and-manage-collections.md
Once you restrict inheritance, you will need to add users directly to the restri
1. Select the **Restrict inherited permissions** toggle button again to revert. :::image type="content" source="./media/how-to-create-and-manage-collections/remove-restriction.png" alt-text="Screenshot of Purview studio collection window, with the role assignments tab selected, and the unrestrict inherited permissions slide button highlighted." border="true":::
+
+## Register source to a collection
+
+1. Select **Register** or register icon on collection node to register a data source. Note that only data source admin can register sources.
+
+ :::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Purview studio window with the register button highlighted both at the top of the page and under a collection."border="true":::
+
+1. Fill in the data source name and other source information. The bottom of the form lists all the collections on which you have scan permission. You can select one collection. All assets under this source will belong to the collection you select.
+
+ :::image type="content" source="./media/how-to-create-and-manage-collections/register-source.png" alt-text="Screenshot of the source registration window."border="true":::
+
+1. The created data source will be put under the selected collection. Select **View details** to see the data source.
+
+ :::image type="content" source="./media/how-to-create-and-manage-collections/see-registered-source.png" alt-text="Screenshot of the data map Purview studio window with the newly added source card highlighted."border="true":::
+
+1. Select **New scan** to create a scan under the data source.
+
+ :::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Purview studio window with the new scan button highlighted."border="true":::
+
+1. Similarly, at the bottom of the form, you can select a collection, and all assets scanned will be included in the collection.
+Note that the collections listed here are restricted to subcollections of the data source collection.
+
+ :::image type="content" source="./media/how-to-create-and-manage-collections/scan-under-collection.png" alt-text="Screenshot of a new scan window with the collection dropdown highlighted."border="true":::
+
+1. Back in the collection window, you will see the data sources linked to the collection on the sources card.
+
+ :::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Purview studio window with the newly added source card highlighted in the map."border="true":::
## Add assets to collections
-Assets and sources are also associated with collections. During a scan, if the scan was associated with a collection the assets will be automatically added to that collection, but can also be manually added.
Assets and sources are also associated with collections. During a scan, if the scan was associated with a collection, the assets will be automatically added to that collection, but they can also be manually added to any subcollections.
1. Check the collection information in asset details. You can find collection information in the **Collection path** section on right-top corner of the asset details page.
Assets and sources are also associated with collections. During a scan, if the s
:::image type="content" source="./media/how-to-create-and-manage-collections/move-asset.png" alt-text="Screenshot of Purview studio asset window with the collection path highlighted and the ellipsis button next to collection path selected." border="true"::: 1. Select the **Move to another collection** button.
-1. In the right side panel, choose the target collection you want move to. Note that you can only see the collections where you have write permissions.
1. In the right side panel, choose the target collection you want to move to. Note that you can only see the collections where you have write permissions. The asset can also only be added to the subcollections of the data source collection.
:::image type="content" source="./media/how-to-create-and-manage-collections/move-select-collection.png" alt-text="Screenshot of Purview studio pop-up window with the select a collection dropdown menu highlighted." border="true":::
Assets and sources are also associated with collections. During a scan, if the s
:::image type="content" source="./media/how-to-create-and-manage-collections/view-asset-details.png" alt-text="Screenshot of the catalog Purview studio window with the by collection tab selected and asset check boxes highlighted."border="true":::
-## Register source to a collection
-
-1. Select **Register** or register icon on collection node to register a data source. Note that only data source admin can register sources.
-
- :::image type="content" source="./media/how-to-create-and-manage-collections/register-by-collection.png" alt-text="Screenshot of the data map Purview studio window with the register button highlighted both at the top of the page and under a collection."border="true":::
-
-1. Fill in the data source name, and other source information. It lists all the collections which you have scan permission on the bottom of the form. You can select one collection. All assets under this source will belong to the collection you select.
-
- :::image type="content" source="./media/how-to-create-and-manage-collections/register-source.png" alt-text="Screenshot of the source registration window."border="true":::
-
-1. The created data source will be put under the selected collection. Select **View details** to see the data source.
-
- :::image type="content" source="./media/how-to-create-and-manage-collections/see-registered-source.png" alt-text="Screenshot of the data map Purview studio window with the newly added source card highlighted."border="true":::
-
-1. Select **New scan** to create scan under the data source.
-
- :::image type="content" source="./media/how-to-create-and-manage-collections/new-scan.png" alt-text="Screenshot of a source Purview studio window with the new scan button highlighted."border="true":::
-
-1. Similarly, at the bottom of the form, you can select a collection, and all assets scanned will be included in the collection.
-Note that the collections listed here are restricted to subcollections of the data source collection.
-
- :::image type="content" source="./media/how-to-create-and-manage-collections/scan-under-collection.png" alt-text="Screenshot of a new scan window with the collection dropdown highlighted."border="true":::
-
-1. Back in the collection window, you will see the data sources linked to the collection on the sources card.
-
- :::image type="content" source="./media/how-to-create-and-manage-collections/source-under-collection.png" alt-text="Screenshot of the data map Purview studio window with the newly added source card highlighted in the map."border="true":::
- ## Legacy collection guide > [!NOTE]
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/purview/manage-integration-runtimes.md
Here are the domains and ports that will need to be allowed through corporate an
| `*.frontend.clouddatahub.net` | 443 | Global infrastructure Purview uses to run its scans. Wildcard required as there is no dedicated resource. |
| `<managed Purview storage account>.core.windows.net` | 443 | Used by the self-hosted integration runtime to connect to the managed Azure storage account.|
| `<managed Purview storage account>.queue.core.windows.net` | 443 | Queues used by Purview to run the scan process. |
-| `<your Key Vault Name>.vault.azure.net` | 443 | Required if any credentials are stored in Azure Key Vault. |
| `download.microsoft.com` | 443 | Optional for SHIR updates. |
+
+Based on your sources, you may also need to allow the domains of other Azure or external sources. A few examples are provided below, as well as the Azure Key Vault domain, if you are connecting to any credentials stored in the Key Vault.
+
+| Domain names | Outbound ports | Description |
+| -- | -- | - |
+| `<storage account>.core.windows.net` | 443 | Optional, to connect to an Azure Storage account. |
+| `*.database.windows.net` | 1433 | Optional, to connect to Azure SQL Database or Azure Synapse Analytics. |
+| `*.azuredatalakestore.net`<br>`login.microsoftonline.com/<tenant>/oauth2/token` | 443 | Optional, to connect to Azure Data Lake Store Gen 1. |
+| `<datastoragename>.dfs.core.windows.net` | 443 | Optional, to connect to Azure Data Lake Store Gen 2. |
+| `<your Key Vault Name>.vault.azure.net` | 443 | Required if any credentials are stored in Azure Key Vault. |
| Various Domains | Dependent | Domains for any other sources the SHIR will connect to. |
-
> [!IMPORTANT]
> In most environments, you will also need to confirm that your DNS is correctly configured. To confirm, you can use **nslookup** from your SHIR machine to check connectivity to each of the above domains. Each nslookup should return the IP of the resource. If you are using [Private Endpoints](catalog-private-link.md), the private IP should be returned and not the Public IP. If no IP is returned, or if when using Private Endpoints the public IP is returned, you will need to address your DNS/VNET association, or your Private Endpoint/VNET peering.
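A minimal sketch of such a check, using a hypothetical managed storage account name; with private endpoints, expect a private IP in the response:

```powershell
# Resolve a required domain from the SHIR machine (hypothetical account name).
# With private endpoints, the answer should be a private IP, not a public one.
nslookup mypurviewstorage.core.windows.net
```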
You can delete a self-hosted integration runtime by navigating to **Integration
- [How scans detect deleted assets](concept-scans-and-ingestion.md#how-scans-detect-deleted-assets) -- [Use private endpoints with Purview](catalog-private-link.md)
+- [Use private endpoints with Purview](catalog-private-link.md)
search Search Query Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-query-odata-filter.md
Find hotels where the terms "hotel" and "airport" are no more than five words ap
$filter=search.ismatch('"hotel airport"~5', 'Description', 'full', 'any') and not Rooms/any(room: room/SmokingAllowed) ```
+Find documents that have a word that starts with the letters "lux" in the Description field. This query uses [prefix search](query-simple-syntax.md#prefix-queries) in combination with `search.ismatch`.
+
+```odata-filter-expr
+ $filter=search.ismatch('lux*', 'Description')
+```
+ ## Next steps - [Filters in Azure Cognitive Search](search-filters.md)
search Search Query Odata Full Text Search Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-query-odata-full-text-search-functions.md
Find documents where the terms "hotel" and "airport" are within 5 words from eac
search.ismatch('"hotel airport"~5', 'Description', 'full', 'any') and Rooms/any(room: not room/SmokingAllowed) ```
+Find documents that have a word that starts with the letters "lux" in the Description field. This query uses [prefix search](query-simple-syntax.md#prefix-queries) in combination with `search.ismatch`.
+
+```odata-filter-expr
+ search.ismatch('lux*', 'Description')
+```
+ ## Next steps - [Filters in Azure Cognitive Search](search-filters.md)
security-center Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security-center/deploy-vulnerability-assessment-vm.md
The vulnerability scanner extension works as follows:
The scanner extension will be installed on all of the selected machines within a few minutes.
- Scanning begins automatically as soon as the extension is successfully deployed. Scans will then run at four-hour intervals. This interval isn't configurable.
+ Scanning begins automatically as soon as the extension is successfully deployed. Scans will then run every 12 hours. This interval isn't configurable.
>[!IMPORTANT]
> If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following IPs to your allow lists (via port 443 - the default for HTTPS):
security Ddos Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/ddos-best-practices.md
documentation.
> [!NOTE]
-> Azure App Service Environment for PowerApps or API management in a virtual network with a public IP are both not natively supported.
+> Azure App Service Environment for Power Apps or API management in a virtual network with a public IP are both not natively supported.
## Next steps
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/recover-from-identity-compromise.md
Review administrative rights in both your cloud and on-premises environments. Fo
|**All cloud environments** | - Review any privileged access rights in the cloud and remove any unnecessary permissions<br> - Implement Privileged Identity Management (PIM)<br> - Set up Conditional Access policies to limit administrative access during hardening |
|**All on-premises environments** | - Review privileged access on-premise and remove unnecessary permissions<br> - Reduce membership of built-in groups<br> - Verify Active Directory delegations<br> - Harden your Tier 0 environment, and limit who has access to Tier 0 assets |
|**All Enterprise applications** | Review for delegated permissions and consent grants that allow any of the following actions: <br><br> - Modifying privileged users and roles <br>- Reading or accessing all mailboxes <br>- Sending or forwarding email on behalf of other users <br>- Accessing all OneDrive or SharePoint site content <br>- Adding service principals that can read/write to the directory |
-|**Microsoft 365 environments** |Review access and configuration settings for your Microsoft 365 environment, including: <br>- SharePoint Online Sharing <br>- Microsoft Teams <br>- PowerApps <br>- Microsoft OneDrive for Business |
+|**Microsoft 365 environments** |Review access and configuration settings for your Microsoft 365 environment, including: <br>- SharePoint Online Sharing <br>- Microsoft Teams <br>- Power Apps <br>- Microsoft OneDrive for Business |
| **Review user accounts in your environments** |- Review and remove guest user accounts that are no longer needed. <br>- Review email configurations for delegates, mailbox folder permissions, ActiveSync mobile device registrations, Inbox rules, and Outlook on the Web options. <br>- Review ApplicationImpersonation rights and reduce any use of legacy authentication as much as possible. <br>- Validate that MFA is enforced and that both MFA and self-service password reset (SSPR) contact information for all users is correct. |
| | |
service-bus-messaging Service Bus Queues Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/service-bus-queues-topics-subscriptions.md
Learn more about the [JMS 2.0 entities](java-message-service-20-entities.md) and
## Next steps
-For more information and examples of using Service Bus messaging, see the following advanced topics:
-
-* [Service Bus messaging overview](service-bus-messaging-overview.md)
-* [Quickstart: Send and receive messages using the Azure portal and .NET](service-bus-quickstart-portal.md)
-* [Tutorial: Update inventory using Azure portal and topics/subscriptions](service-bus-tutorial-topics-subscriptions-portal.md)
+Try the samples in the language of your choice to explore Azure Service Bus features.
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)
+- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
+
+Find samples for the older .NET and Java client libraries below:
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
service-bus-messaging Topic Filters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-bus-messaging/topic-filters.md
For examples, see [Service Bus filter examples](service-bus-filter-examples.md).
> Because the Azure portal now supports Service Bus Explorer functionality, subscription filters can be created or edited from the portal.

## Next steps
-See the following samples:
+Try the samples in the language of your choice to explore Azure Service Bus features.
-- [.NET - Basic send and receive tutorial with filters](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/GettingStarted/BasicSendReceiveTutorialwithFilters/BasicSendReceiveTutorialWithFilters)
-- [.NET - Topic filters](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/TopicFilters)
-- [Azure Resource Manager template](/azure/templates/microsoft.servicebus/2017-04-01/namespaces/topics/subscriptions/rules)
+- [Azure Service Bus client library samples for .NET (latest)](/samples/azure/azure-sdk-for-net/azuremessagingservicebus-samples/)
+- [Azure Service Bus client library samples for Java (latest)](/samples/azure/azure-sdk-for-java/servicebus-samples/)
+- [Azure Service Bus client library samples for Python](/samples/azure/azure-sdk-for-python/servicebus-samples/)
+- [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/)
+- [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/)
+
+Find samples for the older .NET and Java client libraries below:
+- [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/)
+- [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus/MessageBrowse)
service-fabric How To Grant Access Other Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-grant-access-other-resources.md
And for system-assigned managed identities:
For more details, please see [Vaults - Update Access Policy](/rest/api/keyvault/vaults/updateaccesspolicy).

## Next steps
+* [Deploy a Service Fabric application with Managed Identity to a managed cluster](how-to-managed-cluster-application-managed-identity.md)
* [Deploy an Azure Service Fabric application with a system-assigned managed identity](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md)
* [Deploy an Azure Service Fabric application with a user-assigned managed identity](./how-to-deploy-service-fabric-application-user-assigned-managed-identity.md)
service-fabric How To Managed Cluster App Deployment Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-app-deployment-template.md
Title: Deploy a Service Fabric managed cluster application using ARM template
-description: Deploy an application to a Azure Service Fabric managed cluster using an Azure Resource Manager template.
+ Title: Deploy an application to a managed cluster using Azure Resource Manager
+description: Learn how to deploy, upgrade, or delete a Service Fabric application on an Azure Service Fabric managed cluster using Azure Resource Manager
Previously updated : 5/10/2021 Last updated : 8/23/2021
-# Deploy a Service Fabric managed cluster application using ARM template
+# Manage application lifecycle on a managed cluster using Azure Resource Manager
You have multiple options for deploying Azure Service Fabric applications on your Service Fabric managed cluster. We recommend using Azure Resource Manager. If you use Resource Manager, you can describe applications and services in JSON, and then deploy them in the same Resource Manager template as your cluster. Unlike using PowerShell or Azure CLI to deploy and manage applications, if you use Resource Manager, you don't have to wait for the cluster to be ready; application registration, provisioning, and deployment can all happen in one step. Using Resource Manager is the best way to manage the application life cycle in your cluster. For more information, see [Best practices: Infrastructure as code](service-fabric-best-practices-infrastructure-as-code.md#service-fabric-resources).
In this document, you will learn how to:
> [!div class="checklist"]
>
-> * Deploy application resources by using Resource Manager.
-> * Upgrade application resources by using Resource Manager.
-> * Delete application resources.
+> * Deploy Service Fabric application resources by using Resource Manager.
+> * Upgrade Service Fabric application resources by using Resource Manager.
+> * Delete Service Fabric application resources.
-## Deploy application resources
+## Deploy Service Fabric application resources
The high-level steps you take to deploy an application and its services by using the Resource Manager application resource model are:

1. Package the application code.
The high-level steps you take to deploy an application and its services by using
For more information, view [Package an application](service-fabric-package-apps.md#create-an-sfpkg).
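Since an .sfpkg is simply the application package folder zipped with a different file extension, the packaging step can be scripted; a minimal sketch with assumed local paths:

```powershell
# Zip the built application package layout, then rename the archive to .sfpkg
$packagePath = "C:\build\Voting\pkg\Release"   # assumed build output path
Compress-Archive -Path "$packagePath\*" -DestinationPath "C:\build\Voting.zip" -Force
Rename-Item -Path "C:\build\Voting.zip" -NewName "Voting.sfpkg"
```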
-Then, you create a Resource Manager template, update the parameters file with application details, and deploy the template on the Service Fabric cluster. [Explore samples](https://github.com/Azure-Samples/service-fabric-dotnet-quickstart/tree/voting-sample-no-reverse-proxy/ARM-Managed-Cluster).
+Then, you create a Resource Manager template, update the parameters file with application details, and deploy the template on the Service Fabric managed cluster. [Explore samples](https://github.com/Azure-Samples/service-fabric-dotnet-quickstart/tree/voting-sample-no-reverse-proxy/ARM-Managed-Cluster).
### Create a storage account
The sample application contains [Azure Resource Manager templates](https://githu
} ```
-### Deploy the application
+### Deploy the Service Fabric application
Run the **New-AzResourceGroupDeployment** cmdlet to deploy the application to the resource group that contains your cluster:
New-AzResourceGroupDeployment -ResourceGroupName "sf-cluster-rg" -TemplateParame
## Upgrade the Service Fabric application by using Resource Manager > [!IMPORTANT]
-> Any service being deployed via ARM JSON definition must be removed from the DefaultServices section of the corresponding ApplicationManifest.xml file.
+> Any service being deployed via Azure Resource Manager (ARM) template must be removed from the DefaultServices section of the corresponding ApplicationManifest.xml file.
You might upgrade an application that's already deployed to a Service Fabric cluster for one of these reasons:
You might upgrade an application that's already deployed to a Service Fabric clu
"value": "1.0.1" }, ```
-## Delete application resources
+## Delete Service Fabric application resources
+> [!NOTE]
+> Applications should not be deleted via an Azure Resource Manager (ARM) template, as there is no declarative way to clean up individual resources.
-To delete an application that was deployed by using the application resource model in Resource
+To delete a Service Fabric application that was deployed by using the application resource model in Resource
1. Use the [Get-AzResource](/powershell/module/az.resources/get-azresource) cmdlet to get the resource ID for the application:
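A sketch of that lookup and the subsequent removal, with hypothetical resource group, cluster, and application names:

```powershell
# Look up the application's resource ID, then delete the application resource (names are placeholders)
$app = Get-AzResource -ResourceGroupName "sf-cluster-rg" `
    -ResourceType "Microsoft.ServiceFabric/managedclusters/applications" `
    -Name "mysfcluster/Voting"
Remove-AzResource -ResourceId $app.ResourceId -Force
```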
service-fabric How To Managed Cluster Application Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-application-managed-identity.md
Title: Configure and use application managed identity on Service Fabric managed cluster nodes
-description: Learn how to configure, and use an application managed identity on an ARM template deployed Azure Service Fabric managed cluster.
+ Title: Configure and use applications with managed identity on a Service Fabric managed cluster
+description: Learn how to configure and use an application with managed identity on an Azure Resource Manager (ARM) template deployed Azure Service Fabric managed cluster.
Previously updated : 5/10/2021 Last updated : 8/23/2021
-# Deploy a Service Fabric application with Managed Identity
+# Deploy an application with Managed Identity to a Service Fabric managed cluster
-To deploy a Service Fabric application with managed identity, the application needs to be deployed through Azure Resource Manager, typically with an Azure Resource Manager template. For more information on how to deploy Service Fabric application through Azure Resource Manager, see [Manage applications and services as Azure Resource Manager resources](service-fabric-application-arm-resource.md).
+To deploy a Service Fabric application with managed identity, the application needs to be deployed through Azure Resource Manager, typically with an Azure Resource Manager template. For more information on how to deploy Service Fabric application through Azure Resource Manager, see [Deploy an application to a managed cluster using Azure Resource Manager](how-to-managed-cluster-app-deployment-template.md).
> [!NOTE]
>
This property declares (to Azure Resource Manager, and the Managed Identity and
This is the equivalent mapping of an identity to a service as described above, but from the perspective of the service definition. The identity is referenced here by its friendly name (`WebAdmin`), as declared in the application manifest.

## Next steps
-
-* [Leverage the managed identity of a Service Fabric application from service code](./how-to-managed-identity-service-fabric-app-code.md)
-* [Grant an Azure Service Fabric application access to other Azure resources](./how-to-grant-access-other-resources.md)
+* [Leverage the managed identity of a Service Fabric application from service code](how-to-managed-identity-service-fabric-app-code.md)
+* [Grant an Azure Service Fabric application access to other Azure resources](how-to-grant-access-other-resources.md)
service-fabric How To Managed Cluster Application Secrets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-application-secrets.md
Title: Use application secrets in Service Fabric managed clusters
-description: Learn about Azure Service Fabric application secrets and how to gather info required for use in managed clusters
+ Title: Deploy application secrets to a Service Fabric managed cluster
+description: Learn about Azure Service Fabric application secrets and how to deploy them to a managed cluster
Previously updated : 5/10/2021 Last updated : 8/23/2021
-# Use application secrets in Service Fabric managed clusters
+# Deploy application secrets to a Service Fabric managed cluster
-Secrets can be any sensitive information, such as storage connection strings, passwords, or other values that should not be handled in plain text. This article uses Azure Key Vault to manage keys and secrets as it's for Service Fabric managed clusters. However, *using* secrets in an application is cloud platform-agnostic to allow applications to be deployed to a cluster hosted anywhere.
+Secrets can be any sensitive information, such as storage connection strings, passwords, or other values that should not be handled in plain text. We recommend using Azure Key Vault to manage keys and secrets for Service Fabric managed clusters, and this article uses it throughout. However, *using* secrets in an application is cloud platform-agnostic to allow applications to be deployed to a cluster hosted anywhere.
The recommended way to manage service configuration settings is through [service configuration packages][config-package]. Configuration packages are versioned and updatable through managed rolling upgrades with health-validation and auto rollback. This is preferred to global configuration as it reduces the chances of a global service outage. Encrypted secrets are no exception. Service Fabric has built-in features for encrypting and decrypting values in a configuration package Settings.xml file using certificate encryption.
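For reference, values destined for Settings.xml can be encrypted with the Service Fabric PowerShell module; a sketch, assuming the SDK module is installed and the encipherment certificate is present locally (the thumbprint is a placeholder):

```powershell
# Encrypt a secret value against a local certificate; paste the output into Settings.xml
Invoke-ServiceFabricEncryptText -Text "mySecretValue" `
    -CertStore -CertThumbprint "<certificate-thumbprint>" `
    -StoreName "MY" -StoreLocation LocalMachine
```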
This certificate must be installed on each node in the cluster and Service Fabri
For managed clusters, you'll need three values: two from Azure Key Vault, and one you decide on for the local store name on the nodes.

Parameters:
-* Source Vault: This is the
+* `Source Vault`: This is the
* e.g.: /subscriptions/{subscriptionid}/resourceGroups/myrg1/providers/Microsoft.KeyVault/vaults/mykeyvault1
-* Certificate URL: This is the full object identifier and is case-insensitive and immutable
+* `Certificate URL`: This is the full object identifier and is case-insensitive and immutable
* https://mykeyvault1.vault.azure.net/secrets/{secretname}/{secret-version}
-* Certificate Store: This is the local certificate store on the nodes where the cert will be placed
+* `Certificate Store`: This is the local certificate store on the nodes where the cert will be placed
* certificate store name on the nodes, e.g.: "MY"

Service Fabric managed clusters support two methods for adding version-specific secrets to your nodes.
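The two Key Vault values can be gathered with Az PowerShell; a sketch with assumed vault and certificate names:

```powershell
# Source Vault: the Key Vault resource ID
$vault = Get-AzKeyVault -VaultName "mykeyvault1" -ResourceGroupName "myrg1"
$vault.ResourceId

# Certificate URL: the versioned secret identifier of the certificate
$cert = Get-AzKeyVaultCertificate -VaultName "mykeyvault1" -Name "mycert"
$cert.SecretId
```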
service-fabric How To Managed Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-configuration.md
Title: Configure your Service Fabric managed cluster description: Learn how to configure your Service Fabric managed cluster for automatic OS upgrades, NSG rules, and more. Previously updated : 5/10/2021 Last updated : 8/23/2021 # Service Fabric managed cluster configuration options
In addition to selecting the [Service Fabric managed cluster SKU](overview-manag
* Adding a [virtual machine scale set extension](how-to-managed-cluster-vmss-extension.md) to a node type
* Configuring cluster [availability zone spanning](how-to-managed-cluster-availability-zones.md)
-* Configuring cluster [NSG rules and other networking options](how-to-managed-cluster-networking.md)
+* Configuring cluster [network settings](how-to-managed-cluster-networking.md)
+* Configuring a node type for [large virtual machine scale sets](how-to-managed-cluster-large-virtual-machine-scale-sets.md)
* Configuring [managed identity](how-to-managed-identity-managed-cluster-virtual-machine-scale-sets.md) on cluster node types
* Enabling [automatic OS upgrades](how-to-managed-cluster-configuration.md#enable-automatic-os-image-upgrades) for cluster nodes
* Enabling [OS and data disk encryption](how-to-enable-managed-cluster-disk-encryption.md) on cluster nodes
service-fabric How To Managed Cluster Large Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-large-virtual-machine-scale-sets.md
+
+ Title: Configure a secondary node type for large virtual machine scale sets on a Service Fabric managed cluster
+description: This article walks through how to configure a secondary node type as a large virtual machine scale set
+ Last updated : 8/23/2021 ++
+# Service Fabric managed cluster node type scaling
+
+Each node type in a Service Fabric managed cluster is backed by a virtual machine scale set. To allow managed cluster node types to create [large virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md), a property `multiplePlacementGroups` has been added to the node type definition. By default, managed cluster node types set this property to false to keep fault and upgrade domains consistent within a placement group, but this setting limits a node type from scaling beyond 100 VMs. To help decide whether your application can make effective use of large scale sets, see [this list of requirements](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md#checklist-for-using-large-scale-sets).
+
+Since the Azure Service Fabric managed cluster resource provider orchestrates scaling and uses managed disks for data, we are able to support large scale sets for both stateful and stateless secondary node types.
+
+> [!NOTE]
+> This property cannot be modified after a node type is deployed.
+
+## Enable large virtual machine scale sets in a Service Fabric managed cluster
+To configure a secondary node type as a large scale set, set the **multiplePlacementGroups** property to **true**.
+> [!NOTE]
+> This property can't be set on the primary node type.
+
+* The Service Fabric managed cluster resource apiVersion should be **2021-05-01** or later.
+
+```json
+ {
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ "multiplePlacementGroups": true,
+ "isPrimary": false,
+ "vmImagePublisher": "[parameters('vmImagePublisher')]",
+ "vmImageOffer": "[parameters('vmImageOffer')]",
+ "vmImageSku": "[parameters('vmImageSku')]",
+ "vmImageVersion": "[parameters('vmImageVersion')]",
+ "vmSize": "[parameters('nodeTypeSize')]",
+ "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
+ "dataDiskSizeGB": "[parameters('nodeTypeDataDiskSizeGB')]"
+ }
+ }
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy an app to a Service Fabric managed cluster](./tutorial-managed-cluster-deploy-app.md)
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-networking.md
Title: Configure network settings for Service Fabric managed clusters
-description: Learn how to configure your Service Fabric managed cluster for NSG rules, RDP port access, load balancing rules, and more.
+description: Learn how to configure your Service Fabric managed cluster for NSG rules, RDP port access, load-balancing rules, and more.
Previously updated : 5/10/2021 Last updated : 8/23/2021 # Configure network settings for Service Fabric managed clusters
-Service Fabric managed clusters are created with a default networking configuration. This configuration consists of mandatory rules for essential cluster functionality, and a few optional rules such as allowing all outbound traffic by default, which are intended to make customer configuration easier.
+Service Fabric managed clusters are created with a default networking configuration. This configuration consists of an [Azure Load Balancer](../load-balancer/load-balancer-overview.md) with a public IP, a VNet with one subnet allocated, and an NSG configured for essential cluster functionality. There are also optional NSG rules applied, such as allowing all outbound traffic by default, that are intended to make customer configuration easier. This document walks through how to modify the following networking configuration options and more:
-Beyond the default networking configuration, you can modify the networking rules to meet the needs of your scenario.
+- [Manage NSG Rules](#nsgrules)
+- [Manage RDP access](#rdp)
+- [Manage Load Balancer config](#lbconfig)
+- [Enable IPv6](#ipv6)
+- [Bring your own virtual network](#byovnet)
+- [Bring your own load balancer](#byolb)
-## NSG rules guidance
+<a id="nsgrules"></a>
+## Manage NSG rules
+
+### NSG rules guidance
Be aware of these considerations when creating new NSG rules for your managed cluster.
Be aware of these considerations when creating new NSG rules for your managed cl
* Service Fabric managed clusters reserve the priority range 3001 to 4000 for creating optional NSG rules. These rules are added automatically to make configurations quick and easy. You can override these rules by adding custom NSG rules in priority range 1000 to 3000.
* Custom NSG rules should have a priority within the range 1000 to 3000.
-## Apply NSG rules
-
-With classic (non-managed) Service Fabric clusters, you must declare and manage a separate *Microsoft.Network/networkSecurityGroups* resource in order to [apply Network Security Group (NSG) rules to your cluster](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.servicefabric/service-fabric-secure-nsg-cluster-65-node-3-nodetype). Service Fabric managed clusters enable you to assign NSG rules directly within the cluster resource of your deployment template.
+### Apply NSG rules
+Service Fabric managed clusters enable you to assign NSG rules directly within the cluster resource of your deployment template.
Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedclusters#managedclusterproperties-object) property of your *Microsoft.ServiceFabric/managedclusters* resource (version `2021-05-01` or later) to assign NSG rules. For example:
Use the [networkSecurityRules](/azure/templates/microsoft.servicefabric/managedc
} ```
-## RDP Ports
-
-Service Fabric managed clusters do not enable access to the RDP ports by default. You can open RDP ports to the internet by setting the following property on a Service Fabric managed cluster resource.
-
-```json
-"allowRDPAccess": true
-```
-
-When the allowRDPAccess property is set to true, the following NSG rule will be added to your cluster deployment.
-
-```json
-{
- "name": "SFMC_AllowRdpPort",
- "type": "Microsoft.Network/networkSecurityGroups/securityRules",
- "properties": {
- "description": "Optional rule to open RDP ports.",
- "protocol": "tcp",
- "sourcePortRange": "*",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "VirtualNetwork",
- "access": "Allow",
- "priority": 3002,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRange": "3389"
- }
-}
-```
-
-Service Fabric managed clusters automatically creates inbound NAT rules for each instance in a node type.
-To find the port mappings to reach specific instances (cluster nodes) follow the steps below:
-
-Using Azure portal, locate the managed cluster created inbound NAT rules for Remote Desktop Protocol (RDP).
-
-1. Navigate to the managed cluster resource group within your subscription named with the following format: SFC_{cluster-id}
-
-2. Select the load balancer for the cluster with the following format: LB-{cluster-name}
-
-3. On the page for your load balancer, select Inbound NAT rules. Review the inbound NAT rules to confirm the inbound Frontend port to target port mapping for a node.
-
- The following screenshot shows the inbound NAT rules for three different node types:
-
- ![Inbound Nat Rules][Inbound-NAT-Rules]
-
- By default, for Windows clusters, the Frontend Port is in the 50000 and higher range and the target port is port 3389, which maps to the RDP service on the target node.
-
-4. Remotely connect to the specific node (scale set instance). You can use the user name and password that you set when you created the cluster or any other credentials you have configured.
-
-The following screenshot shows using Remote Desktop Connection to connect to the apps (Instance 0) node in a Windows cluster:
-
-![Remote Desktop Connection][sfmc-rdp-connect]
-
-## ClientConnection and HttpGatewayConnection ports
-
+## ClientConnection and HttpGatewayConnection default and optional rules
### NSG rule: SFMC_AllowServiceFabricGatewayToSFRP

A default NSG rule is added to allow the Service Fabric resource provider to access the cluster's clientConnectionPort and httpGatewayConnectionPort. This rule allows access to the ports through the service tag 'ServiceFabric'.
This optional rule enables customers to access SFX, connect to the cluster using
} ```
-## Load balancer ports
+<a id="rdp"></a>
+## Enable access to RDP ports from internet
+
+Service Fabric managed clusters do not enable inbound access to the RDP ports from the internet by default. You can open inbound access to the RDP ports from the internet by setting the following property on a Service Fabric managed cluster resource.
+
+```json
+"allowRDPAccess": true
+```
+
+When the allowRDPAccess property is set to true, the following NSG rule will be added to your cluster deployment.
+
+```json
+{
+ "name": "SFMC_AllowRdpPort",
+ "type": "Microsoft.Network/networkSecurityGroups/securityRules",
+ "properties": {
+ "description": "Optional rule to open RDP ports.",
+ "protocol": "tcp",
+ "sourcePortRange": "*",
+ "sourceAddressPrefix": "*",
+ "destinationAddressPrefix": "VirtualNetwork",
+ "access": "Allow",
+ "priority": 3002,
+ "direction": "Inbound",
+ "sourcePortRanges": [],
+ "destinationPortRange": "3389"
+ }
+}
+```
+
+Service Fabric managed clusters automatically create inbound NAT rules for each instance in a node type.
+To find the port mappings to reach specific instances (cluster nodes), follow the steps below:
+
+Using the Azure portal, locate the inbound NAT rules that the managed cluster created for Remote Desktop Protocol (RDP).
+
+1. Navigate to the managed cluster resource group within your subscription named with the following format: SFC_{cluster-id}
+
+2. Select the load balancer for the cluster with the following format: LB-{cluster-name}
+
+3. On the page for your load balancer, select Inbound NAT rules. Review the inbound NAT rules to confirm the inbound Frontend port to target port mapping for a node.
+
+ The following screenshot shows the inbound NAT rules for three different node types:
+
+ ![Inbound Nat Rules][Inbound-NAT-Rules]
+
+ By default, for Windows clusters, the Frontend Port is in the 50000 and higher range and the target port is port 3389, which maps to the RDP service on the target node.
+
+4. Remotely connect to the specific node (scale set instance). You can use the user name and password that you set when you created the cluster or any other credentials you have configured.
+
+The following screenshot shows using Remote Desktop Connection to connect to the apps (Instance 0) node in a Windows cluster:
+
+![Remote Desktop Connection][sfmc-rdp-connect]
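+If you prefer scripting over the portal, the same mappings can be listed with Az PowerShell; a sketch assuming the default SFC_{cluster-id} and LB-{cluster-name} naming described above:
+
+```powershell
+# List frontend-to-target port mappings for the cluster's inbound NAT rules
+$lb = Get-AzLoadBalancer -ResourceGroupName "SFC_<cluster-id>" -Name "LB-<cluster-name>"
+$lb | Get-AzLoadBalancerInboundNatRuleConfig | Select-Object Name, FrontendPort, BackendPort
+```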
+
+<a id="lbconfig"></a>
+## Modify default Load balancer configuration
+
+### Load balancer ports
Service Fabric managed clusters creates an NSG rule in default priority range for all the load balancer (LB) ports configured under "loadBalancingRules" section under *ManagedCluster* properties. This rule opens LB ports for inbound traffic from the internet.
Service Fabric managed clusters creates an NSG rule in default priority range fo
} ```
-## Load balancer probes
+### Load balancer probes
-Service Fabric managed clusters automatically creates load balancer probes for fabric gateway ports as well as all ports configured under the "loadBalancingRules" section of managed cluster properties.
+Service Fabric managed clusters automatically creates load balancer probes for fabric gateway ports and all ports configured under the `loadBalancingRules` section of managed cluster properties.
```json {
Service Fabric managed clusters automatically creates load balancer probes for f
} ```
-## Next steps
+<a id="ipv6"></a>
+## Enable IPv6 (preview)
+Managed clusters do not enable IPv6 by default. This feature will enable full dual stack IPv4/IPv6 capability from the Load Balancer frontend to the backend resources. Any changes you make to the managed cluster load balancer config or NSG rules will affect both the IPv4 and IPv6 routing.
-[Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
+> [!NOTE]
+> This setting is not available in the portal and cannot be changed once the cluster is created.
+1. Set the following property on a Service Fabric managed cluster resource.
+ ```json
+ "apiVersion": "2021-07-01-preview",
+ "type": "Microsoft.ServiceFabric/managedclusters",
+ ...
+    "properties": {
+        "enableIpv6": true
+    }
+ ```
+
+2. Deploy your IPv6-enabled managed cluster. Customize the [sample template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-IPv6/AzureDeploy.json) as needed or build your own.
+ In the following example, we'll create a resource group called `MyResourceGroup` in `westus` and deploy a cluster with this feature enabled.
+ ```powershell
+ New-AzResourceGroup -Name MyResourceGroup -Location westus
+ New-AzResourceGroupDeployment -Name deployment -ResourceGroupName MyResourceGroup -TemplateFile AzureDeploy.json
+ ```
+    After deployment, your cluster's virtual network and resources will be dual-stack. As a result, the cluster's frontend load balancer will have a unique DNS address created, for example `mycluster-ipv6.southcentralus.cloudapp.azure.com`, that is associated with a public IPv6 address on the Azure Load Balancer and private IPv6 addresses on the VMs.
++
+<a id="byovnet"></a>
+## Bring your own virtual network (preview)
+This feature allows customers to use an existing virtual network by specifying a dedicated subnet the managed cluster will deploy its resources into. This can be useful if you already have a configured VNet and subnet with related security policies and traffic routing that you want to use. After you deploy to an existing virtual network, it's easy to use or incorporate other networking features, like Azure ExpressRoute, Azure VPN Gateway, a network security group, and virtual network peering. You can also [bring your own Azure Load Balancer](#byolb) if needed.
+
+> [!NOTE]
+> This setting cannot be changed once the cluster is created, and the managed cluster will assign an NSG to the provided subnet. Do not override the NSG assignment or traffic may break.
+
+**To bring your own virtual network:**
+
+1. Get the service `Id` from your subscription for the Service Fabric Resource Provider application.
+ ```powershell
+ Login-AzAccount
+ Select-AzSubscription -SubscriptionId <SubId>
+ Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"
+ ```
+
+ > [!NOTE]
+    > Make sure you are in the correct subscription; the principal ID will change if the subscription is in a different tenant.
+
+ ```powershell
+ ServicePrincipalNames : {74cb6831-0dbb-4be1-8206-fd4df301cdc2}
+ ApplicationId : 74cb6831-0dbb-4be1-8206-fd4df301cdc2
+ ObjectType : ServicePrincipal
+ DisplayName : Azure Service Fabric Resource Provider
+ Id : 00000000-0000-0000-0000-000000000000
+ ```
+
+ Note the **Id** of the previous output as **principalId** for use in a later step
+
+ |Role definition name|Role definition ID|
+ |-|-|
+ |Network Contributor|4d97b98b-1d4f-4787-a291-c67834d212e7|
+
+ Note the `Role definition name` and `Role definition ID` property values for use in a later step
+
+2. Add a role assignment to the Service Fabric Resource Provider application. Adding a role assignment is a one-time action. You add the role by running the following PowerShell commands or by configuring an Azure Resource Manager (ARM) template as detailed below.
+
+ In the following steps, we start with an existing virtual network named ExistingRG-vnet, in the ExistingRG resource group. The subnet is named default.
+
+ Obtain the required info from the existing VNet.
+
+ ```powershell
+ Login-AzAccount
+ Select-AzSubscription -SubscriptionId <SubId>
+ Get-AzVirtualNetwork -Name ExistingRG-vnet -ResourceGroupName ExistingRG
+ ```
+    Note the subnet name and `Id` property value returned in the `Subnets` section of the response; you'll use them in later steps.
+
+ ```JSON
+ Subnets:[
+ {
+ ...
+ "Id": "/subscriptions/<subscriptionId>/resourceGroups/Existing-RG/providers/Microsoft.Network/virtualNetworks/ExistingRG-vnet/subnets/default"
+ }]
+ ```
+
+    Run the following PowerShell command using the principal ID and role definition name from step 1, and the assignment scope `Id` obtained above:
+ ```powershell
+ New-AzRoleAssignment -PrincipalId 00000000-0000-0000-0000-000000000000 -RoleDefinitionName "Network Contributor" -Scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/<subnetName>"
+ ```
+
+ Or you can add the role assignment by using an Azure Resource Manager (ARM) template configured with proper values for `principalId`, `roleDefinitionId`, `vnetName`, and `subnetName`:
+
+ ```JSON
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2020-04-01-preview",
+ "name": "[parameters('VNetRoleAssignmentID')]",
+ "scope": "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'), '/subnets/', parameters('subnetName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/4d97b98b-1d4f-4787-a291-c67834d212e7')]",
+ "principalId": "00000000-0000-0000-0000-000000000000"
+ }
+ ```
+ > [!NOTE]
+    > VNetRoleAssignmentID has to be a [GUID](../azure-resource-manager/templates/template-functions-string.md#examples-16). If you deploy a template again including this role assignment, make sure the GUID is the same as the one originally used. We suggest you run this step in isolation, or remove this resource from the cluster template post-deployment, as it only needs to be created once. A stable GUID can be generated up front, as sketched below.
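+    A minimal sketch for generating that value once and reusing it:
+
+    ```powershell
+    # Generate a GUID once; reuse the same value for the VNetRoleAssignmentID parameter on redeployments
+    (New-Guid).Guid
+    ```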
+
+ Here is a full sample [Azure Resource Manager (ARM) template that creates a VNet subnet and does role assignment](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOVNET/SFMC-VNet-RoleAssign.json) you can use for this step.
+
+3. Configure the `subnetId` property for the cluster deployment after the role is set up as shown below:
+
+ ```JSON
+ "resources": [
+    {
+        "apiVersion": "2021-07-01-preview",
+        "type": "Microsoft.ServiceFabric/managedclusters",
+        ...
+        "properties": {
+            "subnetId": "subnetId",
+            ...
+        }
+    }
+    ]
+ ```
+ See the [bring your own VNet cluster sample template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOVNET/AzureDeploy.json) or customize your own.
+
+4. Deploy the configured managed cluster Azure Resource Manager (ARM) template.
+
+ In the following example, we'll create a resource group called `MyResourceGroup` in `westus` and deploy a cluster with this feature enabled.
+ ```powershell
+ New-AzResourceGroup -Name MyResourceGroup -Location westus
+ New-AzResourceGroupDeployment -Name deployment -ResourceGroupName MyResourceGroup -TemplateFile AzureDeploy.json
+ ```
+
+ When you bring your own VNet subnet the public endpoint is still created and managed by the resource provider, but in the configured subnet. The feature does not allow you to specify the public ip/re-use static ip on the Azure Load Balancer. You can [bring your own Azure Load Balancer](#byolb) in concert with this feature or by itself if you require those or other load balancer scenarios that aren't natively supported.
+
+<a id="byolb"></a>
+## Bring your own Azure Load Balancer (preview)
+Managed clusters create an Azure Load Balancer and fully qualified domain name with a static public IP for both the primary and secondary node types. This feature allows you to create or reuse an Azure Load Balancer for secondary node types for both inbound and outbound traffic. When you bring your own Azure Load Balancer, you can:
+
+* Use a pre-configured Load Balancer static IP address for either private or public traffic
+* Map a Load Balancer to a specific node type
+* Configure NSG rules per node type because each node type is deployed in its own VNET
+* Maintain existing policies and controls you may have in place
+
+> [!NOTE]
+> You cannot switch from default to custom after cluster deployment for a node type, but you can modify custom load balancer configuration post-deployment.
+
+**Feature Requirements**
+ * Basic and Standard SKU Azure Load Balancer types are supported
+ * You must have backend and NAT pools configured on the existing Azure Load Balancer. See the full [create and assign role sample](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOLB/createlb-and-assign-role) for an example.
+
+Here are a couple of example scenarios customers may use this for:
+
+In this example, a customer wants to route traffic through an existing Azure Load Balancer configured with an existing static IP address to two node types.
+![Bring your own Load Balancer example 1][sfmc-byolb-example-1]
+
+In this example, a customer wants to route traffic through existing Azure Load Balancers to independently manage traffic flow to applications that live on separate node types. When set up like this example, each node type will be behind its own NSG that you can manage.
+![Bring your own Load Balancer example 2][sfmc-byolb-example-2]
+
+**To bring your own load balancer:**
+
+1. Get the service `Id` from your subscription for the Service Fabric Resource Provider application:
+
+ ```powershell
+ Login-AzAccount
+ Select-AzSubscription -SubscriptionId <SubId>
+ Get-AzADServicePrincipal -DisplayName "Azure Service Fabric Resource Provider"
+ ```
+
+ > [!NOTE]
+    > Make sure you are in the correct subscription; the principal ID will change if the subscription is in a different tenant.
+
+ ```powershell
+ ServicePrincipalNames : {74cb6831-0dbb-4be1-8206-fd4df301cdc2}
+ ApplicationId : 74cb6831-0dbb-4be1-8206-fd4df301cdc2
+ ObjectType : ServicePrincipal
+ DisplayName : Azure Service Fabric Resource Provider
+ Id : 00000000-0000-0000-0000-000000000000
+ ```
+
+ Note the **Id** of the previous output as **principalId** for use in a later step
+
+ |Role definition name|Role definition ID|
+ |-|-|
+ |Network Contributor|4d97b98b-1d4f-4787-a291-c67834d212e7|
+
+ Note the `Role definition name` and `Role definition ID` property values for use in a later step
+
+2. Add a role assignment to the Service Fabric Resource Provider application. Adding a role assignment is a one-time action. You add the role by running the following PowerShell commands or by configuring an Azure Resource Manager (ARM) template as detailed below.
+
+ In the following steps, we start with an existing load balancer named Existing-LoadBalancer1, in the Existing-RG resource group. The subnet is named default.
+
+    Obtain the required `Id` property info from the existing Azure Load Balancer.
+
+ ```powershell
+ Login-AzAccount
+ Select-AzSubscription -SubscriptionId <SubId>
+ Get-AzLoadBalancer -Name "Existing-LoadBalancer1" -ResourceGroupName "Existing-RG"
+ ```
+ Note the following `Id` you'll use in the next step:
+ ```JSON
+ {
+ ...
+ "Id": "/subscriptions/<subscriptionId>/resourceGroups/Existing-RG/providers/Microsoft.Network/loadBalancers/Existing-LoadBalancer1"
+ }
+ ```
+    Run the following PowerShell command using the principal ID and role definition name from step 1, and the assignment scope `Id` you just obtained:
+
+ ```powershell
+ New-AzRoleAssignment -PrincipalId 00000000-0000-0000-0000-000000000000 -RoleDefinitionName "Network Contributor" -Scope "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/loadBalancers/<LoadBalancerName>"
+ ```
+
+    Or you can add the role assignment by using an Azure Resource Manager (ARM) template configured with proper values for `principalId`, `roleDefinitionId`, and the load balancer name (`lbName`):
+
+ ```JSON
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2020-04-01-preview",
+ "name": "[parameters('loadBalancerRoleAssignmentID')]",
+ "scope": "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.Network/loadBalancers/', variables('lbName'))]"
+ ],
+ "properties": {
+ "roleDefinitionId": "[concat('/subscriptions/', subscription().subscriptionId, '/providers/Microsoft.Authorization/roleDefinitions/4d97b98b-1d4f-4787-a291-c67834d212e7')]",
+ "principalId": "00000000-0000-0000-0000-000000000000"
+ }
+ ```
+ > [!NOTE]
+    > loadBalancerRoleAssignmentID has to be a [GUID](../azure-resource-manager/templates/template-functions-string.md#examples-16). If you deploy a template again including this role assignment, make sure the GUID is the same as the one originally used. We suggest you run this step in isolation, or remove this resource from the cluster template post-deployment, as it only needs to be created once.
+
+3. Configure required outbound connectivity. All nodes must be able to route outbound on port 443 to the Service Fabric resource provider. You can use the `ServiceFabric` service tag in your NSG to restrict the traffic destination to the Azure endpoint, as sketched below.
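+
+    A hedged sketch of such an outbound rule with Az PowerShell (the NSG name and rule priority are assumptions):
+
+    ```powershell
+    # Allow outbound TCP 443 to the Service Fabric resource provider via the ServiceFabric service tag
+    $nsg = Get-AzNetworkSecurityGroup -Name "my-nsg" -ResourceGroupName "Existing-RG"
+    $nsg | Add-AzNetworkSecurityRuleConfig -Name "AllowSFRPOutbound" -Access Allow `
+        -Direction Outbound -Priority 1000 -Protocol Tcp `
+        -SourceAddressPrefix "*" -SourcePortRange "*" `
+        -DestinationAddressPrefix "ServiceFabric" -DestinationPortRange "443" |
+        Set-AzNetworkSecurityGroup
+    ```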
+
+4. Optionally configure an inbound application port and related probe on your existing Azure Load Balancer.
+
+5. Optionally configure the managed cluster NSG rules applied to the node type to allow any required traffic that you've configured on the Azure Load Balancer, or traffic will be blocked.
+
+ See the [bring your own load balancer sample Azure Resource Manager (ARM) template](https://raw.githubusercontent.com/Azure-Samples/service-fabric-cluster-templates/SF-Managed-Standard-SKU-2-NT-BYOLB/AzureDeploy.json) for an example on how to open inbound rules.
+
+6. Deploy the configured managed cluster Azure Resource Manager (ARM) template.
+
+ In the following example, we'll create a resource group called `MyResourceGroup` in `westus` and deploy a cluster with this feature enabled.
+ ```powershell
+ New-AzResourceGroup -Name MyResourceGroup -Location westus
+ New-AzResourceGroupDeployment -Name deployment -ResourceGroupName MyResourceGroup -TemplateFile AzureDeploy.json
+ ```
+
+    After deployment, your secondary node type is configured to use the specified load balancer for inbound and outbound traffic. The Service Fabric client connection and gateway endpoints will still point to the public DNS name of the managed cluster primary node type's static IP address.
++
+## Next steps
+[Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
[Service Fabric managed clusters overview](overview-managed-cluster.md)

<!--Image references-->
[Inbound-NAT-Rules]: ./media/how-to-managed-cluster-networking/inbound-nat-rules.png
[sfmc-rdp-connect]: ./media/how-to-managed-cluster-networking/sfmc-rdp-connect.png
+[sfmc-byolb-example-1]: ./media/how-to-managed-cluster-networking/sfmc-byolb-scenario-1.png
+[sfmc-byolb-example-2]: ./media/how-to-managed-cluster-networking/sfmc-byolb-scenario-2.png
service-fabric How To Managed Cluster Stateless Node Type https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-stateless-node-type.md
Title: Deploy a Service Fabric managed cluster with stateless node types description: Learn how to create and deploy stateless node types in Service Fabric managed clusters Previously updated : 5/10/2021 Last updated : 8/23/2021 # Deploy a Service Fabric managed cluster with stateless node types
Service Fabric node types come with an inherent assumption that at some point of
* Primary node types cannot be configured to be stateless
* Stateless node types require an API version of **2021-05-01** or later
-
+* This will automatically set the **multiplePlacementGroups** property to **true**; you can [learn more here](how-to-managed-cluster-large-virtual-machine-scale-sets.md)
+* This enables support for up to 1000 nodes for the given node type
Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
service-fabric How To Managed Cluster Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-upgrades.md
Title: Upgrading Azure Service Fabric managed clusters description: Learn about options for upgrading your Azure Service Fabric managed cluster Previously updated : 06/16/2021 Last updated : 08/23/2021 # Manage Service Fabric managed cluster upgrades
-An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. Here's how to manage when and how Microsoft updates your Azure Service Fabric managed cluster.
+An Azure Service Fabric cluster is a resource you own, but it's partly managed by Microsoft. Here's how to manage when and how Microsoft updates your Azure Service Fabric managed cluster runtime.
## Set upgrade mode
-Azure Service Fabric managed clusters are set by default to receive automatic Service Fabric upgrades as they are released by Microsoft using a [wave deployment](#wave-deployment-for-automatic-upgrades) strategy. As an alternative, you can setup manual mode upgrades in which you choose from a list of currently supported versions. You can configure these settings either through the *Fabric upgrades* control in Azure portal or the `ClusterUpgradeMode` setting in your cluster deployment template.
+Azure Service Fabric managed clusters are set by default to receive automatic Service Fabric upgrades as they are released by Microsoft using a [wave deployment](#wave-deployment-for-automatic-upgrades) strategy. As an alternative, you can set up manual mode upgrades in which you choose from a list of currently supported versions. You can configure these settings either through the *Fabric upgrades* control in Azure portal or the `ClusterUpgradeMode` setting in your cluster deployment template.
## Wave deployment for automatic upgrades
With wave deployment, you can create a pipeline for upgrading your test, stage,
To select a wave deployment for automatic upgrade, first determine which wave to assign your cluster: * **Wave 0** (`Wave0`): Clusters are updated as soon as a new Service Fabric build is released.
-* **Wave 1** (`Wave1`): Clusters are updated after Wave 0 to allow for bake time. This occurs after a minimum of 7 days after Wave 0
-* **Wave 2** (`Wave2`): Clusters are updated last to allow for further bake time. This occurs after a minimum of 14 days after Wave 0
+* **Wave 1** (`Wave1`): Clusters are updated after Wave 0 to allow for bake time. Wave 1 occurs after a minimum of 7 days after Wave 0.
+* **Wave 2** (`Wave2`): Clusters are updated last to allow for further bake time. Wave 2 occurs after a minimum of 14 days after Wave 0.
## Set the Wave for your cluster
If a rollback occurs, you'll need to fix the issues that resulted in the rollbac
#### Automatic upgrade with wave deployment
-To configure Automatic upgrades and the wave deployment, simply add/validate `ClusterUpgradeMode` is set to `Automatic` and the `upgradeWave` property is defined with one of the wave values listed above in your Resource Manager template.
+To configure Automatic upgrades and the wave deployment, simply add/validate `ClusterUpgradeMode` is set to `Automatic` and the `clusterUpgradeCadence` property is defined with one of the wave values listed above in your Resource Manager template.
```json {
To configure Automatic upgrades and the wave deployment, simply add/validate `Cl
"type": "Microsoft.ServiceFabric/managedClusters", "properties": { "ClusterUpgradeMode": "Automatic",
- "upgradeWave": "Wave1",
+ "clusterUpgradeCadence": "Wave1",
} } ```
-Once you deploy the updated template, your cluster will be enrolled in the specified wave for the next upgrade period and after that.
+Once you deploy the updated template, your cluster will be enrolled in the specified wave for automatic upgrades.
## Query for supported cluster versions
service-fabric How To Managed Cluster Vmss Extension https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-cluster-vmss-extension.md
Title: Add a virtual machine scale set extension to a Service Fabric managed cluster node type description: Here's how to add a virtual machine scale set extension a Service Fabric managed cluster node type Previously updated : 5/10/2021 Last updated : 8/02/2021
-# Add a virtual machine scale set extension to a Service Fabric managed cluster node type
+# Virtual machine scale set extension support on Service Fabric managed cluster node type(s)
-Each node type in a Service Fabric managed cluster is backed by a virtual machine scale set. This enables you to add [virtual machine scale set extensions](../virtual-machines/extensions/overview.md) to your Service Fabric managed cluster node types.
+Each node type in a Service Fabric managed cluster is backed by a virtual machine scale set. This enables you to add [virtual machine scale set extensions](../virtual-machines/extensions/overview.md) to your Service Fabric managed cluster node types. Extensions are small applications that provide post-deployment configuration and automation on Azure VMs. The Azure platform hosts many extensions covering VM configuration, monitoring, security, and utility applications. Publishers take an application, wrap it into an extension, and simplify the installation. All you need to do is provide mandatory parameters.
-You can add a virtual machine scale set extension to a node type using the [Add-AzServiceFabricManagedNodeTypeVMExtension](/powershell/module/az.servicefabric/add-azservicefabricmanagednodetypevmextension) PowerShell command.
+## Add a virtual machine scale set extension
+You can add a virtual machine scale set extension to a Service Fabric managed cluster node type using the [Add-AzServiceFabricManagedNodeTypeVMExtension](/powershell/module/az.servicefabric/add-azservicefabricmanagednodetypevmextension) PowerShell command.
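+
+A hedged usage sketch, assuming the current Az.ServiceFabric module (the cluster, node type, and extension values are illustrative):
+
+```powershell
+# Add the BGInfo extension to a secondary node type
+Add-AzServiceFabricManagedNodeTypeVMExtension -ResourceGroupName "myResourceGroup" `
+    -ClusterName "mysfcluster" -NodeTypeName "nt2" -Name "bginfo" `
+    -Publisher "Microsoft.Compute" -Type "BGInfo" -TypeHandlerVersion "2.1"
+```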
-Alternately, you can a virtual machine scale set extension on a Service Fabric managed cluster node type in your Azure Resource Manager template, for example:
+Alternately, you can add a virtual machine scale set extension on a Service Fabric managed cluster node type in your Azure Resource Manager template, for example:
```json {
service-fabric How To Managed Identity Service Fabric App Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/how-to-managed-identity-service-fabric-app-code.md
It is recommended that requests failed due to throttling are retried with an exp
See [Azure services that support Azure AD authentication](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md) for a list of resources that support Azure AD, and their respective resource IDs.

## Next steps
-* [Deploy an Azure Service Fabric application with a system-assigned managed identity](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md)
-* [Deploy an Azure Service Fabric application with a user-assigned managed identity](./how-to-deploy-service-fabric-application-user-assigned-managed-identity.md)
-* [Grant an Azure Service Fabric application access to other Azure resources](./how-to-grant-access-other-resources.md)
+* [Deploy a Service Fabric application with Managed Identity to a managed cluster](how-to-managed-cluster-application-managed-identity.md)
+* [Deploy a Service Fabric application with a system-assigned Managed Identity to a classic cluster](./how-to-deploy-service-fabric-application-system-assigned-managed-identity.md)
+* [Deploy a Service Fabric application with a user-assigned Managed Identity to a classic cluster](./how-to-deploy-service-fabric-application-user-assigned-managed-identity.md)
+* [Granting a Service Fabric application's Managed Identity access to Azure resources](./how-to-grant-access-other-resources.md)
* [Explore a sample application using Service Fabric Managed Identity](https://github.com/Azure-Samples/service-fabric-managed-identity)
service-fabric Overview Managed Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/overview-managed-cluster.md
In terms of size and complexity, the ARM template for a Service Fabric managed c
| Storage account(s) | |
| Virtual network | |
-Service Fabric managed clusters provide a number of advantages over traditional clusters:
+## Service Fabric managed cluster advantages
+Service Fabric managed clusters provide a number of advantages over traditional clusters including:
**Simplified cluster deployment and management**

- Deploy and manage a single Azure resource
-- Certificate management and autorotation
+- Cluster certificate management and 90-day autorotation
- Simplified scaling operations
+- Automatic OS Image upgrade support
+- In-Place OS SKU change support
**Prevent operational errors**

- Prevent configuration mismatches with underlying resources
Service Fabric managed clusters are available in both Basic and Standard SKUs.
| - | -- | -- |
| Network resource (SKU for [Load Balancer](../load-balancer/skus.md), [Public IP](../virtual-network/public-ip-addresses.md)) | Basic | Standard |
| Min node (VM instance) count | 3 | 5 |
-| Max node count per node type | 100 | 100 |
+| Max node count per node type | 100 | 1000 |
| Max node type count | 1 | 20 |
| Add/remove node types | No | Yes |
| Zone redundancy | No | Yes |
To get started with Service Fabric managed clusters, try the quickstart:
> [!div class="nextstepaction"]
> [Create a Service Fabric managed cluster](quickstart-managed-cluster-template.md)
+Also reference [how to configure your managed cluster](how-to-managed-cluster-configuration.md).
+
[sf-composition]: ./media/overview-managed-cluster/sfrp-composition-resource.png
[sf-encapsulation]: ./media/overview-managed-cluster/sfrp-encapsulated-resource.png
service-fabric Service Fabric Application Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-application-scenarios.md
Consider using the Service Fabric platform for the following types of applicatio
## Application design case studies
-Case studies that show how Service Fabric is used to design applications are published on the [Customer stories](https://customers.microsoft.com/search?sq=%22Azure%20Service%20Fabric%22&ff=&p=2&so=story_publish_date%20desc) and [Microservices in Azure](https://azure.microsoft.com/solutions/microservice-applications/) sites.
+Case studies that show how Service Fabric is used to design applications are published on the [Customer stories](https://customers.microsoft.com/en-us/search?sq=%22Azure%20Service%20Fabric%22&ff=&p=2&so=story_publish_date%20desc) and [Microservices in Azure](https://azure.microsoft.com/solutions/microservice-applications/) sites.
## Designing applications composed of stateless and stateful microservices
Here's an example application that uses stateful
* [Partition services](service-fabric-concepts-partitioning.md) [Image1]: media/service-fabric-application-scenarios/AppwithStatelessServices.png
-[Image2]: media/service-fabric-application-scenarios/AppwithStatefulServices.png
+[Image2]: media/service-fabric-application-scenarios/AppwithStatefulServices.png
service-fabric Tutorial Managed Cluster Deploy App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-deploy-app.md
Remove-ServiceFabricApplication fabric:/Voting
In this step, we deployed an application to a Service Fabric managed cluster. To learn more about application deployment options, see:
-* [Deploy managed cluster application secrets](how-to-managed-cluster-application-secrets.md)
-* [Deploy managed cluster applications using ARM templates](how-to-managed-cluster-app-deployment-template.md)
-* [Deploy managed cluster applications with managed identity](how-to-managed-cluster-application-managed-identity.md)
+* [Deploy application secrets to a managed cluster](how-to-managed-cluster-application-secrets.md)
+* [Deploy an application to a managed cluster using Azure Resource Manager](how-to-managed-cluster-app-deployment-template.md)
+* [Deploy an application with managed identity to a managed cluster](how-to-managed-cluster-application-managed-identity.md)
+ To learn more about managed cluster configuration options, see:
service-fabric Tutorial Managed Cluster Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-deploy.md
Title: Deploy a Service Fabric managed cluster description: In this tutorial, you will deploy a Service Fabric managed cluster for testing. Previously updated : 5/10/2021 Last updated : 8/23/2021
This part of the series covers how to:
> * Connect to your Azure account
> * Create a new resource group
> * Deploy a Service Fabric managed cluster
-> * Add a primary node type to the cluster
+> * Add a Primary node type to the cluster
## Prerequisites
Set-AzContext -SubscriptionId <your-subscription>
Next, create the resource group for the Managed Service Fabric cluster, replacing `<your-rg>` and `<location>` with the desired group name and location.
-> [!NOTE]
-> Supported regions for the public preview include `centraluseuap`, `eastus2euap`, `eastasia`, `northeurope`, `westcentralus`, and `eastus2`.
-
```powershell
$resourceGroup = "myResourceGroup"
$location = "EastUS2"
service-fabric Tutorial Managed Cluster Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/tutorial-managed-cluster-scale.md
Title: Scale out a Service Fabric managed cluster description: In this tutorial, learn how to scale out a node type of a Service Fabric managed cluster. Previously updated : 5/10/2021 Last updated : 8/23/2021
This part of the series covers how to:
Change the instance count to increase or decrease the number of nodes on the node type that you would like to scale. You can find node type names in the Azure Resource Manager template (ARM template) from your cluster deployment, or in the Service Fabric Explorer.

> [!NOTE]
-> If the node type is primary you will not be able to go below 3 nodes for a Basic SKU cluster, and 5 nodes for a Standard SKU cluster.
+> For the Primary node type, you will not be able to go below 3 nodes for a Basic SKU cluster, and 5 nodes for a Standard SKU cluster.
```powershell
$resourceGroup = "myResourceGroup"
spring-cloud How To Access Data Plane Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-access-data-plane-azure-ad-rbac.md
Previously updated : 02/04/2021 Last updated : 08/25/2021
This article explains how to access the Spring Cloud Config Server and Spring Cl
## Assign role to Azure AD user/group, MSI, or service principal
-Assign the [azure-spring-cloud-data-reader](../role-based-access-control/built-in-roles.md#azure-spring-cloud-data-reader) role to the [user | group | service-principal | managed-identity] at [management-group | subscription | resource-group | resource] scope.
+Assign the role to the [user | group | service-principal | managed-identity] at [management-group | subscription | resource-group | resource] scope.
+
+| Role name | Description |
+|--|--|
+| Azure Spring Cloud Config Server Reader | Allow read access to Azure Spring Cloud Config Server. |
+| Azure Spring Cloud Config Server Contributor | Allow read, write, and delete access to Azure Spring Cloud Config Server. |
+| Azure Spring Cloud Service Registry Reader | Allow read access to Azure Spring Cloud Service Registry. |
+| Azure Spring Cloud Service Registry Contributor | Allow read, write, and delete access to Azure Spring Cloud Service Registry. |
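For example, a minimal Azure CLI sketch that assigns one of these roles at resource scope; the assignee object ID, subscription, resource group, and service name are placeholders:

```azurecli
# Grant read access to the managed Config Server for a user, group, or service principal.
az role assignment create \
    --assignee "<azure-ad-object-id>" \
    --role "Azure Spring Cloud Config Server Reader" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.AppPlatform/Spring/<service-name>"
```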
For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).

## Access Config Server and Service Registry Endpoints
-After the Azure Spring Cloud Data Reader role is assigned, customers can access the Spring Cloud Config Server and the Spring Cloud Service Registry endpoints. Use the following procedures:
+After the role is assigned, the assignee can access the Spring Cloud Config Server and the Spring Cloud Service Registry endpoints using the following procedures:
-1. Get an access token. After an Azure AD user is assigned the Azure Spring Cloud Data Reader role, customers can use the following commands to log in to Azure CLI with user, service principal, or managed identity to get an access token. For details, see [Authenticate Azure CLI](/cli/azure/authenticate-azure-cli).
+1. Get an access token. After an Azure AD user is assigned the role, they can use the following commands to sign in to Azure CLI with user, service principal, or managed identity to get an access token. For details, see [Authenticate Azure CLI](/cli/azure/authenticate-azure-cli).
    ```azurecli
    az login
    az account get-access-token
    ```
-2. Compose the endpoint. We support default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud. For more information, see [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints). Customers can also get a full list of supported endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud by accessing endpoints:
+1. Compose the endpoint. We support the default endpoints of the Spring Cloud Config Server and Spring Cloud Service Registry managed by Azure Spring Cloud.
+
+ * *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/{path}'*
+ * *'https://SERVICE_NAME.svc.azuremicroservices.io/config/{path}'*
+
+ >[!NOTE]
+ > If you're using Azure China, replace `*.azuremicroservices.io` with `*.microservices.azure.cn`. For more information, see the section [Check endpoints in Azure](/azure/china/resources-developer-guide#check-endpoints-in-azure) in the [Azure China developer guide](/azure/china/resources-developer-guide).
+
+1. Access the composed endpoint with the access token (see the sketch after these steps). Put the access token in a header to provide authorization: `--header 'Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}'`.
+
+ For example:
+
+ a. Access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/health'* to see the health status of Config Server.
+
+ b. Access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/eureka/apps'* to see the registered apps in Spring Cloud Service Registry (Eureka here).
+
+   If the response is *401 Unauthorized*, check whether the role has been successfully assigned; it can take several minutes for a role assignment to take effect. Also verify that the access token hasn't expired.
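Putting these steps together, a minimal sketch looks like the following; SERVICE_NAME is a placeholder for your own service name:

```azurecli
# Get a token for the signed-in identity, then call the Config Server health endpoint.
TOKEN=$(az account get-access-token --query accessToken --output tsv)
curl --header "Authorization: Bearer $TOKEN" \
    "https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/health"
```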
+
+For more information about actuator endpoints, see [Production ready endpoints](https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready-endpoints).
+
+For Eureka endpoints, see [Eureka-REST-operations](https://github.com/Netflix/eureka/wiki/Eureka-REST-operations).
+
+For config server endpoints and detailed path information, see [ResourceController.java](https://github.com/spring-cloud/spring-cloud-config/blob/main/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/resource/ResourceController.java) and [EncryptionController.java](https://github.com/spring-cloud/spring-cloud-config/blob/main/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/encryption/EncryptionController.java).
+
+## Register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Cloud
+
+After the role is assigned, you can register Spring Boot apps to Spring Cloud Config Server and Service Registry managed by Azure Spring Cloud with Azure AD token authentication. Both Config Server and Service Registry support [custom REST template](https://cloud.spring.io/spring-cloud-config/reference/html/#custom-rest-template) to inject the bearer token for authentication.
+
+For more information, see the samples [Access Azure Spring Cloud managed Config Server](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-config-server-client) and [Access Azure Spring Cloud managed Spring Cloud Service Registry](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-eureka-client). The following sections explain some important details in these samples.
+
+**In *AccessTokenManager.java*:**
+
+`AccessTokenManager` is responsible for getting an access token from Azure AD. Configure the service principal's sign-in information in the *application.properties* file and initialize `ApplicationTokenCredentials` to get the token. You can find this file in both samples.
+
+```java
+prop.load(in);
+tokenClientId = prop.getProperty("access.token.clientId");
+String tenantId = prop.getProperty("access.token.tenantId");
+String secret = prop.getProperty("access.token.secret");
+String clientId = prop.getProperty("access.token.clientId");
+credentials = new ApplicationTokenCredentials(
+ clientId, tenantId, secret, AzureEnvironment.AZURE);
+```
+
+**In *CustomConfigServiceBootstrapConfiguration.java*:**
+
+`CustomConfigServiceBootstrapConfiguration` implements the custom REST template for Config Server and injects the token from Azure AD as `Authorization` headers. You can find this file in the [Config Server sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-config-server-client).
+
+```java
+public class RequestResponseHandlerInterceptor implements ClientHttpRequestInterceptor {
+
+ @Override
+ public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {
+ String accessToken = AccessTokenManager.getToken();
+ request.getHeaders().remove(AUTHORIZATION);
+ request.getHeaders().add(AUTHORIZATION, "Bearer " + accessToken);
+
+ ClientHttpResponse response = execution.execute(request, body);
+ return response;
+ }
+
+}
+```
+
+**In *CustomRestTemplateTransportClientFactories.java*:**
+
+The previous two classes are for the implementation of the custom REST template for Spring Cloud Service Registry. The `intercept` part is the same as in the Config Server above. Be sure to add `factory.mappingJacksonHttpMessageConverter()` to the message converters. You can find this file in the [Spring Cloud Service Registry sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/custom-eureka-client).
- * *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/'*
- * *'https://SERVICE_NAME.svc.azuremicroservices.io/config/actuator/'*
+```java
+private RestTemplate customRestTemplate() {
+ /*
+ * Inject your custom rest template
+ */
+ RestTemplate restTemplate = new RestTemplate();
+ restTemplate.getInterceptors()
+ .add(new RequestResponseHandlerInterceptor());
+ RestTemplateTransportClientFactory factory = new RestTemplateTransportClientFactory();
->[!NOTE]
-> If you are using Azure China, please replace `*.azuremicroservices.io` with `*.microservices.azure.cn`, [learn more](/azure/china/resources-developer-guide#check-endpoints-in-azure).
+ restTemplate.getMessageConverters().add(0, factory.mappingJacksonHttpMessageConverter());
-3. Access the composed endpoint with the access token. Put the access token in a header to provide authorization: `--header 'Authorization: Bearer {TOKEN_FROM_PREVIOUS_STEP}`. Only the "GET" method is supported.
+ return restTemplate;
+}
+```
- For example, access an endpoint like *'https://SERVICE_NAME.svc.azuremicroservices.io/eureka/actuator/health'* to see the health status of eureka.
+If you're running applications on a Kubernetes cluster, we recommend that you use an IP address to register with Spring Cloud Service Registry.
- If the response is *401 Unauthorized*, check to see if the role is successfully assigned. It will take several minutes for the role take effect or verify that the access token has not expired.
+```properties
+eureka.instance.prefer-ip-address=true
+```
## Next steps
spring-cloud How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/how-to-config-server.md
All configurable properties used to set up private Git repository with basic aut
| `default-label` | No | The default label of the Git repository, should be the *branch name*, *tag name*, or *commit-id* of the repository. |
| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. |
| `username` | No | The username that's used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
-| `password` | No | The password used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
+| `password` | No | The password or personal access token used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
> [!NOTE]
-> Many `Git` repository servers support the use of tokens rather than passwords for HTTP Basic Authentication. Some repositories, such as GitHub, allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps, force tokens to expire in a few hours. Repositories that cause tokens to expire should not use token-based authentication with Azure Spring Cloud.
+> Many `Git` repository servers support the use of tokens rather than passwords for HTTP Basic Authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire should not use token-based authentication with Azure Spring Cloud.
+> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of a password for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
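For example, a rough sketch of wiring up a private GitHub repository with a personal access token through the CLI, assuming the `spring-cloud` CLI extension is installed; the service, repository, and credential values are placeholders:

```azurecli
az spring-cloud config-server git set \
    --name <service-name> \
    --resource-group <resource-group> \
    --uri https://github.com/<org>/<config-repo> \
    --label main \
    --username <username> \
    --password <personal-access-token>
```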
### Git repositories with pattern
All configurable properties used to set up Git repositories with pattern are lis
| `repos."default-label"` | No | The default label of the Git repository, should be the *branch name*, *tag name*, or *commit-id* of the repository. |
| `repos."search-paths"` | No | An array of strings used to search subdirectories of the Git repository. |
| `repos."username"` | No | The username that's used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
-| `repos."password"` | No | The password used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
+| `repos."password"` | No | The password or personal access token used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
| `repos."private-key"` | No | The SSH private key to access Git repository, _required_ when the URI starts with *git@* or *ssh://*. |
| `repos."host-key"` | No | The host key of the Git repository server, should not include the algorithm prefix as covered by `host-key-algorithm`. |
| `repos."host-key-algorithm"` | No | The host key algorithm, should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. *Required* only if `host-key` exists. |
Now that your configuration files are saved in a repository, you need to connect
![The Edit Authentication pane basic auth](media/spring-cloud-tutorial-config-server/basic-auth.png)

> [!CAUTION]
- > Some Git repository servers, such as GitHub, use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Cloud, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Cloud.
+ > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Cloud, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Cloud.
+ > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of a password for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
* **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your **Private key**. Optionally, specify your **Host key** and **Host key algorithm**. Be sure to include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
spring-cloud Monitor App Lifecycle Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/monitor-app-lifecycle-events.md
For example, when you restart your app, you can find the affected instances from
### Monitor unplanned app lifecycle events
-When your app is restarted because of unplanned events, your Azure Spring Cloud instance will show a status of **degraded** in the **Resource health** section of the Azure portal. Degraded means that your resource detected a loss in performance, although it's still available for use. Examples of unplanned events include app crash, health check failure, and system outage.
+When your app is restarted because of unplanned events, your Azure Spring Cloud instance will show a status of **degraded** in the **Resource health** section of the Azure portal. Degraded means that your resource detected a potential loss in performance, although it's still available for use. Examples of unplanned events include app crash, health check failure, and system outage.
:::image type="content" source="media/monitor-app-lifecycle-events/resource-health-detail.png" alt-text="Screenshot of the resource health pane":::
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/archive-rehydrate-overview.md
Previously updated : 08/11/2021 Last updated : 08/24/2021
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Previously updated : 08/11/2021 Last updated : 08/24/2021
$ctx = (Get-AzStorageAccount `
# Change the blob's access tier to Hot with High rehydrate priority.
$blob = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
-$blob.ICloudBlob.SetStandardBlobTier("Hot", "Standard")
+$blob.BlobClient.SetAccessTier("Hot", $null, "High")
```

### [Azure CLI](#tab/azure-cli)
az storage blob set-tier /
+## Rehydrate a large number of blobs
+
+To rehydrate a large number of blobs at one time, call the [Blob Batch](/rest/api/storageservices/blob-batch) operation to call [Set Blob Tier](/rest/api/storageservices/set-blob-tier) as a bulk operation. For a code example that shows how to perform the batch operation, see [AzBulkSetBlobTier](/samples/azure/azbulksetblobtier/azbulksetblobtier/).
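If calling Blob Batch directly isn't practical, a client-side loop over the Azure CLI can approximate the result for smaller sets. This sketch issues one request per blob rather than a single batch call, and assumes blob names without spaces; the account and container names are placeholders:

```azurecli
# Start a Standard-priority rehydration to the Hot tier for every archived blob in a container.
account="mystorageaccount"
container="mycontainer"
for blob in $(az storage blob list --container-name $container --account-name $account \
    --query "[?properties.blobTier=='Archive'].name" --output tsv); do
    az storage blob set-tier --container-name $container --name "$blob" \
        --tier Hot --rehydrate-priority Standard --account-name $account
done
```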
+
## Check the status of a rehydration operation

While the blob is rehydrating, you can check its status and rehydration priority using the Azure portal, PowerShell, or Azure CLI. The status property may return *rehydrate-pending-to-hot* or *rehydrate-pending-to-cool*, depending on the target tier for the rehydration operation. The rehydration priority property returns either *Standard* or *High*.
When the rehydration is complete, you can see in the Azure portal that the fully
### [PowerShell](#tab/powershell)
-To check the status and priority of a pending rehydration operation with PowerShell, call the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) command, and check the **RehydrationStatus** and **RehydratePriority** properties of the blob. If the rehydration is a copy operation, check these properties on the destination blob. Remember to replace placeholders in angle brackets with your own values:
+To check the status and priority of a pending rehydration operation with PowerShell, call the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) command, and check the **ArchiveStatus** and **RehydratePriority** properties of the blob. If the rehydration is a copy operation, check these properties on the destination blob. Remember to replace placeholders in angle brackets with your own values:
```powershell
$rehydratingBlob = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
+$rehydratingBlob.BlobProperties.ArchiveStatus
$rehydratingBlob.BlobProperties.RehydratePriority
-$rehydratingBlob.ICloudBlob.Properties.RehydrationStatus
```

### [Azure CLI](#tab/azure-cli)
storage Blob Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-storage-monitoring-scenarios.md
With Azure Synapse, you can create server-less SQL pool to query log data when y
## See also

- [Monitoring Azure Blob Storage](monitor-blob-storage.md).
+- [Azure Blob Storage monitoring data reference](monitor-blob-storage-reference.md)
- [Tutorial: Use Kusto queries in Azure Data Explorer and Azure Monitor](/azure/data-explorer/kusto/query/tutorial?pivots=azuredataexplorer).
- [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md).
storage Data Lake Storage Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/data-lake-storage-acl-cli.md
ACL inheritance is already available for new child items that are created under
## Get ACLs
-Get the ACL of a **directory** by using the `az storage fs access show` command.
+Get the ACL of a **directory** by using the [az storage fs access show](/cli/azure/storage/fs/access#az_storage_fs_access_show) command.
This example gets the ACL of a directory, and then prints the ACL to the console.
This example gets the ACL of a directory, and then prints the ACL to the console
az storage fs access show -p my-directory -f my-file-system --account-name mystorageaccount --auth-mode login
```
-Get the access permissions of a **file** by using the `az storage fs access show` command.
+Get the access permissions of a **file** by using the [az storage fs access show](/cli/azure/storage/fs/access#az_storage_fs_access_show) command.
This example gets the ACL of a file and then prints the ACL to the console.
This section shows you how to:
### Set an ACL
-Use the `az storage fs access set` command to set the ACL of a **directory**.
+Use the [az storage fs access set](/cli/azure/storage/fs/access#az_storage_fs_access_set) command to set the ACL of a **directory**.
This example sets the ACL on a directory for the owning user, owning group, or other users, and then prints the ACL to the console.
This example sets the *default* ACL on a directory for the owning user, owning g
az storage fs access set --acl "default:user::rw-,group::rw-,other::-wx" -p my-directory -f my-file-system --account-name mystorageaccount --auth-mode login
```
-Use the `az storage fs access set` command to set the acl of a **file**.
+Use the [az storage fs access set](/cli/azure/storage/fs/access#az_storage_fs_access_set) command to set the acl of a **file**.
This example sets the ACL on a file for the owning user, owning group, or other users, and then prints the ACL to the console.
This section shows you how to:
### Update an ACL
-Another way to set this permission is to use the `az storage fs access set` command.
+Another way to set this permission is to use the [az storage fs access set](/cli/azure/storage/fs/access#az_storage_fs_access_set) command.
Update the ACL of a directory or file by setting the `--permissions` parameter to the short form of an ACL.
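For example, this sketch grants the owning user full access, the owning group read and write access, and all others read-only access on a directory, reusing the names from the earlier examples:

```azurecli
az storage fs access set --permissions "rwxrw-r--" -p my-directory -f my-file-system --account-name mystorageaccount --auth-mode login
```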
storage Static Website Content Delivery Network https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/static-website-content-delivery-network.md
You can enable Azure CDN for your static website directly from your storage acco
1. Locate your storage account in the Azure portal and display the account overview.
-1. Under the **Blob Service** menu, select **Azure CDN** to open the **Azure CDN** page:
+1. Under the **Security + networking** menu, select **Azure CDN** to open the **Azure CDN** page:
![Create CDN endpoint](media/storage-blob-static-website-custom-domain/cdn-storage-new.png)
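If you'd rather script this setup than use the portal, a rough Azure CLI equivalent might look like the following; the profile and endpoint names are placeholders, and the `z13` zone segment of the static website origin hostname varies per storage account:

```azurecli
az cdn profile create --resource-group myResourceGroup --name my-cdn-profile --sku Standard_Microsoft
az cdn endpoint create --resource-group myResourceGroup --profile-name my-cdn-profile \
    --name my-cdn-endpoint \
    --origin mystorageaccount.z13.web.core.windows.net \
    --origin-host-header mystorageaccount.z13.web.core.windows.net
```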
An object that's already cached in Azure CDN remains cached until the time-to-li
## Next steps
-(Optional) Add a custom domain to your Azure CDN endpoint. See [Tutorial: Add a custom domain to your Azure CDN endpoint](../../cdn/cdn-map-content-to-custom-domain.md).
+(Optional) Add a custom domain to your Azure CDN endpoint. See [Tutorial: Add a custom domain to your Azure CDN endpoint](../../cdn/cdn-map-content-to-custom-domain.md).
storage Storage Blob Index How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-index-how-to.md
static async Task BlobIndexTagsExample()
This task can be performed by a [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) or a security principal that has been given permission to the `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/filter/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftstorage) via a custom Azure role.
+> [!NOTE]
+> You can't use index tags to retrieve previous versions. Tags for previous versions aren't passed to the blob index engine. For more information, see [Conditions and known issues](storage-manage-find-blobs.md#conditions-and-known-issues).
+
# [Portal](#tab/azure-portal)

Within the Azure portal, the blob index tags filter automatically applies the `@container` parameter to scope your selected container. If you wish to filter and find tagged data across your entire storage account, use our REST API, SDKs, or tools.
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-manage-find-blobs.md
description: Learn how to use blob index tags to categorize, manage, and query f
Previously updated : 06/14/2021 Last updated : 08/25/2021
As datasets get larger, finding a specific object in a sea of data can be diffic
Blob index tags let you:

- Dynamically categorize your blobs using key-value index tags
- Quickly find specific tagged blobs across an entire storage account
- Specify conditional behaviors for blob APIs based on the evaluation of index tags
- Use index tags for advanced controls on features like [blob lifecycle management](storage-lifecycle-management-concepts.md)

Consider a scenario where you have millions of blobs in your storage account, accessed by many different applications. You want to find all related data from a single project. You aren't sure what's in scope as the data can be spread across multiple containers with different naming conventions. However, your applications upload all data with tags based on their project. Instead of searching through millions of blobs and comparing names and properties, you can use `Project = Contoso` as your discovery criteria. Blob index will filter all containers across your entire storage account to quickly find and return just the set of 50 blobs from `Project = Contoso`.
Container and blob name prefixes are one-dimensional categorizations. Blob index
Consider the following five blobs in your storage account:

- *container1/transaction.csv*
- *container2/campaign.docx*
- *photos/bannerphoto.png*
- *archives/completed/2019review.pdf*
- *logs/2020/01/01/logfile.txt*

These blobs are separated using a prefix of *container/virtual folder/blob name*. You can set an index tag attribute of `Project = Contoso` on these five blobs to categorize them together while maintaining their current prefix organization. Adding index tags eliminates the need to move data by exposing the ability to filter and find data using the index.
You can apply multiple tags on your blob to be more descriptive of the data.
> "Status" = 'Unprocessed' > "Priority" = '01'
-To modify the existing index tag attributes, retrieve the existing tag attributes, modify the tag attributes, and replace with the [Set Blob Tags](/rest/api/storageservices/set-blob-tags) operation. To remove all index tags from the blob, call the `Set Blob Tags` operation with no tag attributes specified. As blob index tags are a subresource to the blob data contents, `Set Blob Tags` doesn't modify any underlying content and doesn't change the blob's last-modified-time or eTag. You can create or modify index tags for all current base blobs and previous versions. However, tags on snapshots or soft deleted blobs cannot be modified.
+To modify the existing index tag attributes, retrieve the existing tag attributes, modify the tag attributes, and replace with the [Set Blob Tags](/rest/api/storageservices/set-blob-tags) operation. To remove all index tags from the blob, call the `Set Blob Tags` operation with no tag attributes specified. As blob index tags are a subresource to the blob data contents, `Set Blob Tags` doesn't modify any underlying content and doesn't change the blob's last-modified-time or eTag. You can create or modify index tags for all current base blobs. Index tags are also preserved for previous versions but they aren't passed to the blob index engine, so you cannot query index tags to retrieve previous versions. Tags on snapshots or soft deleted blobs cannot be modified.
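As a sketch of the underlying REST call, `Set Blob Tags` replaces the blob's full tag set with an XML body. The account, container, blob, and SAS token below are placeholders, and the SAS must include the tag (`t`) permission:

```azurecli
# Replace all index tags on a blob with a single Project=Contoso tag.
curl --request PUT \
    "https://mystorageaccount.blob.core.windows.net/mycontainer/myblob?comp=tags&<sas-token>" \
    --header "x-ms-version: 2020-04-08" \
    --header "Content-Type: application/xml" \
    --data '<?xml version="1.0" encoding="utf-8"?><Tags><TagSet><Tag><Key>Project</Key><Value>Contoso</Value></Tag></TagSet></Tags>'
```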
The following limits apply to blob index tags:

- Each blob can have up to 10 blob index tags
- Tag keys must be between one and 128 characters
- Tag values must be between zero and 256 characters
- Tag keys and values are case-sensitive
- Tag keys and values only support string data types. Any numbers, dates, times, or special characters are saved as strings
- Tag keys and values must adhere to the following naming rules:
  - Alphanumeric characters:
    - **a** through **z** (lowercase letters)
    - **A** through **Z** (uppercase letters)
    - **0** through **9** (numbers)
  - Valid special characters: space, plus, minus, period, colon, equals, underscore, forward slash (` +-.:=_/`)

## Getting and listing blob index tags
The [Find Blobs by Tags](/rest/api/storageservices/find-blobs-by-tags) operation
The following criteria apply to blob index filtering:

- Tag keys should be enclosed in double quotes (")
- Tag values and container names should be enclosed in single quotes (')
- The @ character is only allowed for filtering on a specific container name (for example, `@container = 'ContainerName'`)
- Filters are applied with lexicographic sorting on strings
- Same sided range operations on the same key are invalid (for example, `"Rank" > '10' AND "Rank" >= '15'`)
- When using REST to create a filter expression, characters should be URI encoded
- Tag queries are optimized for equality match using a single tag (for example, `StoreID = "100"`). Range queries using a single tag involving >, >=, <, <= are also efficient. Any query using AND with more than one tag will not be as efficient. For example, `Cost > "01" AND Cost <= "100"` is efficient. `Cost > "01" AND StoreID = "2"` is not as efficient.

The below table shows all the valid operators for `Find Blobs by Tags`:
The following sample lifecycle management rule applies to block blobs in a conta
You can authorize access to blob index tags using one of the following approaches:

- Using Azure role-based access control (Azure RBAC) to grant permissions to an Azure Active Directory (Azure AD) security principal. Use Azure AD for superior security and ease of use. For more information about using Azure AD with blob operations, see [Authorize access to data in Azure Storage](../common/authorize-data-access.md).
- Using a shared access signature (SAS) to delegate access to blob index. For more information about shared access signatures, see [Grant limited access to Azure Storage resources using shared access signatures (SAS)](../common/storage-sas-overview.md).
- Using the account access keys to authorize operations with Shared Key. For more information, see [Authorize with Shared Key](/rest/api/storageservices/authorize-with-shared-key).

Blob index tags are a subresource to the blob data. A user with permissions or a SAS token to read or write blobs may not have access to the blob index tags.
Blob index tags are currently available in all public regions.
To get started, see [Use blob index tags to manage and find data](storage-blob-index-how-to.md).

> [!IMPORTANT]
-> You must register your subscription before you can use the blob index on your storage accounts. See the [Conditions and known issues](#conditions-and-known-issues) section of this article.
+> See the [Conditions and known issues](#conditions-and-known-issues) section of this article.

## Conditions and known issues

This section describes known issues and conditions.

- Only general-purpose v2 accounts are supported. Premium block blob, legacy blob, and accounts with a hierarchical namespace enabled aren't supported. General-purpose v1 accounts won't be supported.
## Conditions and known issues This section describes known issues and conditions. - Only general-purpose v2 accounts are supported. Premium block blob, legacy blob, and accounts with a hierarchical namespace enabled aren't supported. General-purpose v1 accounts won't be supported.+ - Uploading page blobs with index tags doesn't persist the tags. Set the tags after uploading a page blob.-- When filtering is scoped to a single container, the `@container` can only be passed if all the index tags in the filter expression are equality checks (key=value).-- When using the range operator with the `AND` condition, you can only specify the same index tag key name (`"Age" > '013' AND "Age" < '100'`).-- If Versioning is enabled, you can still use index tags on the current version. For previous versions, index tags are preserved for versions but aren't passed to the blob index engine. You cannot query index tags to retrieve previous versions.+
+- If Blob storage versioning is enabled, you can still use index tags on the current version. Index tags are preserved for previous versions, but those tags aren't passed to the blob index engine, so you cannot use them to retrieve previous versions. If you promote a previous version to the current version, then the tags of that previous version become the tags of the current version. Because those tags are associated with the current version, they are passed to the blob index engine and you can query them.
- There is no API to determine if index tags are indexed.
- Lifecycle management only supports equality checks with blob index match.
- `Copy Blob` doesn't copy blob index tags from the source blob to the new destination blob. You can specify the tags you want applied to the destination blob during the copy operation.

## FAQ